In our telecon today I took the action to open an issue around getting feedback on, and testing of, our solutions as they begin landing in browsers. There was discussion around the timing of when this should occur, how it should be conducted, and the correct venue for it.
To help the conversation along, let's use `<selectlist>` as a concrete example for meaningful discourse on how best to approach this. As discussed in the meeting, there are a variety of reasons for, and ways in which, you can gather feedback. First, let's look at the reasons:
## What we want to learn
1. Are there use-cases that the author wasn't able to achieve?
2. Are the solutions accessible?
3. Did the author find building their solutions to be complicated?
4. What was the author's satisfaction with the solution (CSAT)?
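For concreteness, here is a minimal sketch of the kind of markup these questions would be probing, based on the explainer's basic usage at the time. The element was still experimental, so treat this as an illustration rather than a final API:

```html
<!-- Minimal usage: the browser supplies the default button and listbox.
     <selectlist> was experimental at the time, so details may have shifted. -->
<selectlist>
  <option>Red</option>
  <option>Green</option>
  <option>Blue</option>
</selectlist>
```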
## When we want to learn this
In my opinion, we would want to conduct this research when:
- There are no major outstanding issues for `<selectlist>` in Open UI, nor in the WG or WHATWG where the formal specification is landing, that would impact the stability of the implementation
- There is an implementation in at least one browser behind an opt-in mechanism (e.g. flags)

We would want the results of this research prior to a browser shipping to stable.
## How to conduct this research
There are numerous venues, each with their own strengths and weaknesses, for providing valuable insights to Open UI. Some of these are:
**User studies:** These are formal studies, typically run by user-research firms, where the user or author is guided through specific tasks that shine a light on the specific ways in which they utilize `<selectlist>` and the issues that they find. The sessions are typically recorded and transcribed to make finding answers easier.
**Pros:** This allows us to test end-users as well as developers to understand their opinions of the feature. It will be comprehensive, and we can provide specific tasks, which is especially valuable for developers since these will be brand-new features. It can answer all of the above questions, but it lets us focus heavily on questions 3 and 4.
**Cons:** This is not cheap, so it's probably outside of Open UI's scope to fund; we would have to rely on a member organization doing this research. While the member will find business value in doing the research, the raw results are often not made publicly available for legal reasons. Additionally, the number of users that can be involved will be limited, since it's a high-touch process.
**Request for demos:** These are informal venues where the community group runs coding challenges, such as CodePen contests, to encourage developers to build cool solutions (as sketched below).
**Pros:** This will result in a lot of example output for the new feature, and the Open UI community group can review the various results, which will allow us to partially answer questions 1 and 2. This will be cheaper, and thus Open UI can probably make it happen.
**Cons:** We will need to set up in-depth documentation and guidance for developers to ensure that they know how to use the feature prior to asking them to produce cool examples. There is also no built-in way to gather feedback on questions 1, 3, and 4.
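As a sketch of the kind of demo submission we might ask for, here is a customized `<selectlist>`. The `slot`/`behavior` attribute names below follow one draft of the explainer's anatomy and are assumptions for illustration only; the actual part names were still in flux at the time:

```html
<!-- A hypothetical demo submission: an author-styled button replaces the
     default one. The slot="button"/behavior="button" anatomy is assumed
     from a draft of the explainer and may not match what ships. -->
<selectlist>
  <button slot="button" behavior="button" class="fancy">Pick a color</button>
  <option>Red</option>
  <option>Green</option>
  <option>Blue</option>
</selectlist>

<style>
  /* The slotted button lives in the light DOM, so ordinary CSS applies. */
  .fancy {
    border-radius: 8px;
    padding: 0.5em 1em;
  }
</style>
```

Reviewing a batch of submissions like this would surface question 1 (unachievable use-cases) fairly directly; questions 3 and 4 would still need a survey attached to the contest.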
Open UI's goal is to solve the majority of use-cases with solutions that are accessible by default across all form factors. As such, we should have concrete success metrics, such as:
- There are no `<selectlist>` use-cases that aren't achievable
- The solution is accessible across all form factors
- Authors and users provide a CSAT of 3.5+
This is meant to be a kickoff to begin formulating how we can ensure that we're producing a successful solution, so please provide your feedback on any questions I may be missing, as well as on principles, timelines, or success metrics.