Learning
This week I continued digging into designing for human-centric AI experiences. Here are some terms and concepts I’ve come across lately.
Levels of automation of decision and action selection
In 1978, Thomas Sheridan and Bill Verplank (human factors and systems engineering researchers) argued that automation is not ‘all or nothing’: the question is not only whether to automate a task entirely, but to what extent. Their model describes 10 levels of automation of decision and action selection.
Low level 👇
Level 1: The computer offers no assistance, and humans make all decisions and actions. There is no AI used in this step.
Level 2: The AI offers a complete set of alternatives for decisions/actions. You can see this with Google Search, where the AI provides a choice of results.
Level 3: The AI narrows down the selection to a few alternatives. Netflix does this by showing you a personalized set of relevant movies.
Level 4: The AI suggests one decision, for example, Grammarly suggesting how to structure a sentence.
Level 5: The AI executes the suggestion if approved by the human, for example, asking Alexa to set an alarm.
Level 6: The AI allows the human a restricted time to veto before the automatic decision.
Level 7: The AI executes automatically and informs the human, for example, robots in warehouses.
Level 8: The AI informs the human only if asked, for example, spam filters.
Level 9: The AI informs the human only if the system decides to, for example, credit card fraud detection.
Level 10: The AI decides everything and acts while ignoring the human. Level 10 is full autonomy. We don’t yet have any systems that operate without any human intervention.
High level ☝️
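As a rough sketch, the scale can be encoded as a simple enumeration. The level names and the helper function below are my own shorthand for this post, not from Sheridan and Verplank’s paper:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Sheridan & Verplank's 10 levels of automation, paraphrased."""
    NO_ASSISTANCE = 1         # human decides and acts alone
    ALL_ALTERNATIVES = 2      # AI offers a complete set of options
    FEW_ALTERNATIVES = 3      # AI narrows the options down to a few
    SUGGESTS_ONE = 4          # AI suggests a single decision
    EXECUTES_IF_APPROVED = 5  # AI acts once the human approves
    VETO_WINDOW = 6           # human has limited time to veto
    EXECUTES_THEN_INFORMS = 7 # AI acts, then tells the human
    INFORMS_IF_ASKED = 8      # AI tells the human only on request
    INFORMS_IF_IT_DECIDES = 9 # AI decides whether to tell the human
    FULL_AUTONOMY = 10        # AI acts, ignoring the human

def requires_human_approval(level: AutomationLevel) -> bool:
    """Up to level 5, nothing happens without the human's say-so."""
    return level <= AutomationLevel.EXECUTES_IF_APPROVED
```

A designer could use something like this to tag each feature in a product with where it sits on the scale, making the degree of delegated authority explicit.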
Explainable AI (XAI)
Explainable AI (XAI) is a research field that studies how AI decisions and data driving those decisions can be explained to people in order to provide transparency, enable assessment of accountability, demonstrate fairness, or facilitate understanding.
Adjustable Interfaces vs Probabilistic Interfaces
Adjustable interfaces rely on direct user input, while probabilistic interfaces rely on predictions and machine learning. They aren’t just different approaches; they represent fundamentally different philosophies about user agency.
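A toy sketch of the distinction, with entirely made-up function names: the same setting handled by direct user input versus by a prediction from past behaviour.

```python
def adjustable_font_size(user_choice: int) -> int:
    """Adjustable: the interface does exactly what the user asked."""
    return user_choice

def probabilistic_font_size(reading_history: list[int], default: int = 14) -> int:
    """Probabilistic: the interface predicts a value from past behaviour."""
    if not reading_history:
        return default
    # A trivial stand-in for a model: average of the sizes the user
    # has previously read at.
    return round(sum(reading_history) / len(reading_history))
```

In the adjustable case the user keeps full agency; in the probabilistic case the system guesses on their behalf, which is convenient when the guess is right and frustrating when it is wrong.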
Source: Cat Noone
Bookmarks
- Introduction to Explainable AI for designers
- People + AI Guidebook - A set of methods, best practices and examples for designing with AI (Google)
- HAX Toolkit - A toolkit for teams building user-facing AI products (Microsoft)
- AI-human interaction - Guidelines and framework for using AI (Gitlab)
- Beyond Chatbots and Prompts: User-Centric AI Experience
- The Interface Illusion
The Ionosphere
At the opposite end of the technology spectrum is shortwave radio.
In early 2024 One Deck Pete and I started a new radio project, Downbeat on Shortwave. I recorded a 15-minute mix, then sent it to Pete. In call-and-response fashion, he recorded a 15-minute reply, sent it back to me, and so forth. Kind of like mix-tape pen pals, sharing music between London and Perth. It took us a year to assemble a 60-minute episode. Hopefully we manage another one in 2025!
A little different from your average internet radio show, this broadcast was sent to the ionosphere, then bounced back to earth via Shortwave Gold, a shortwave radio station based in Germany that broadcasts across Europe and beyond, depending on the weather, solar conditions, and the time of day when you listen.
Listen back to the clean studio mix and the shortwave radio, “via the ionosphere” mix on Pete’s blog.