In part 1 we looked at what design tokens are, touched briefly on how the concept has evolved through different toolsets, and looked at some of the benefits they bring, as well as some areas of concern where they don’t quite solve the whole problem of design yet. In this post we’ll look at how we’ve addressed a few of those concerns in FAST.
Designers solve many different problems across various categories of experiences. Many of us on FAST have found, over years of working on numerous component frameworks, that no matter how much time and planning go into defining a system of guidance and components, there will inevitably be scenarios that challenge it. If the system is too limiting, consuming teams will go with whatever makes sense for the constraints at hand, often ignoring the system entirely or building in such a way that, when the system changes, they are left behind.
Design systems implemented primarily through design tokens don’t go far enough in conveying the reasons behind — and relationship between — individual tokens. They are typically a big graph of manually-derived values with a very specific way they are expected to be used. We sought to build a framework for design decisions that describes the design intent, with hooks that allow for scaling to any unique needs of each problem being solved. Individual decisions can be remixed to create new but cohesive experiences. Reducing friction in the system speeds up product development. Designing the system for flexibility keeps the product more aligned with the system intent, with specific and apparent “earned” variations.
Another goal of this system is to evolve the conversation and expectations of design. Many expressions of the design decisions open possibilities to better support individual humans using your product, not limiting the experience to a one-size-fits-some visual design. It’s a new frame of reference compared to traditional design, but the possibilities and advantages become apparent once you’re familiar with the foundation.
FAST introduced an industry-first design token concept called Adaptive UI. Adaptive design tokens, backed by a “recipe”, produce a variable result by calling a function (the “algorithm”) with input parameters, which may be other design tokens. This structure is not much different than the typical chain or aliasing of plain design tokens.
A straightforward example of the need for an adaptive recipe is for the color of “hint” (or “secondary”, “placeholder”, or “metadata”) text. Due to accessibility requirements, text color must always meet 4.5:1 contrast relative to its container’s fill color.
There are many different possible background colors, including main app surfaces, layering, flyouts or dialogs, cards, and banners. There are also gradients and images, and of course both light and dark mode.
The recipe in this example, or the way you would describe the need or decision, is simply to find a color that meets contrast. The recipe is not “grey-40” or some fixed color palette reference that works for only one scenario. Manually picking colors is not a scalable approach for guaranteed accessibility (among other factors). Limiting allowable background colors is not a realistic or scalable approach for design.
Here’s a sample “Person card” component with “hint” text applied to the second line, illustrated in different possible realistic containers:
To solve this in Adaptive UI, the “neutral-foreground-hint-recipe” relies on the “contrastSwatch” algorithm, which simply returns a color meeting the specified contrast. This recipe specifies the desired palette (neutral), the reference color (the container’s fill color), and the minimum contrast value (4.5). It’s wrapped by the “neutral-foreground-hint” token, which is applied to UI elements in the component styles. A component can now be implemented without knowing the context in which it will be used, and the design intent (the contrast requirement) will always be upheld.
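To make the idea concrete, here’s a minimal sketch of a contrast-seeking recipe. The WCAG luminance and contrast math is standard, but the palette, the names, and the simple first-match search are illustrative; FAST’s actual “contrastSwatch” algorithm searches relative to the reference color on a much larger palette.

```typescript
type Swatch = string; // hex color like "#707070"

// WCAG relative luminance of an sRGB hex color.
function luminance(hex: Swatch): number {
  const channel = (i: number) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * channel(1) + 0.7152 * channel(3) + 0.0722 * channel(5);
}

// WCAG contrast ratio between two colors.
function contrast(a: Swatch, b: Swatch): number {
  const [la, lb] = [luminance(a), luminance(b)];
  return (Math.max(la, lb) + 0.05) / (Math.min(la, lb) + 0.05);
}

// The "recipe": return the first palette swatch meeting the target
// contrast against the container's fill color.
function contrastSwatch(palette: Swatch[], reference: Swatch, target: number): Swatch {
  return palette.find((s) => contrast(s, reference) >= target) ?? palette[palette.length - 1];
}

// A tiny neutral palette, light to dark (real palettes are much larger).
const neutralPalette: Swatch[] = ["#ffffff", "#d0d0d0", "#a0a0a0", "#707070", "#404040", "#000000"];

// Same recipe, different containers; no per-context token pairs needed.
const hintOnWhite = contrastSwatch(neutralPalette, "#ffffff", 4.5); // "#707070"
```

Because the swatch is derived from the container’s fill at the point of use, the same “hint” token works on any background the palette can satisfy.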
Without Adaptive UI, one common attempt to solve this is with color pairs, like “banner-background” and “hint-on-banner-background”. The reason this doesn’t scale well is that any new use case requires a new color pair, and any components that are composed of “hint” text now need to be aware of the context in which they are used (in a banner, dialog, etc.). The various placements of the “Person card” above illustrate how this model breaks. We would need at least four different color pairs. Also, the component would need to know where it’s being placed, or the page author would need to know which design tokens the component is using to override them individually.
With this example we can see how adaptive recipes address the issue of breaking something downstream by changing a design token value — we don’t need to tweak the “foreground-hint” token to get it to work for a particular placement because the recipe determines the correct value based on context.
Let’s track how FAST addresses the original concerns around using design tokens from the first post:
Of course, we didn’t build Adaptive UI only for 4.5:1 text contrast. Color is a good example because it’s easy to visualize and advanced use cases can’t be solved with CSS functions, but the model scales to any other design values you might need — relational type ramps, adjustable density through padding and spacing, animation, focus indication, elevation treatment, strokes, shape, and whatever else you can imagine within the scope of web design.
Recipes can be built around the context of a group or set of tokens and values. Consider a type ramp. Often the tokens will be indexed like “font-size-1”, “font-size-2”, “font-size-3” or use t-shirt sizes like “typescale.small.size”, “typescale.medium.size”, and “typescale.large.size”. The values for each token were chosen with intention: there is a size which should be used for body text, and the other sizes relate to the body text for a cohesive heading and caption design. In the tokens above, the body size is probably “2” or “medium”, but that’s not conveyed through the naming convention, which can leave consumers guessing how to use them. One way to handle this is by aliasing the base type ramp sizes to usage, like “caption”, “body”, “heading”, etc. This is a start, but it would be even better if the underlying ramp conveyed the intent.
When it comes to text, it’s important that your content is clearly readable for your customers. It’s probably also desirable that the treatment is consistent across your experience. People have different visual needs, and unfortunately sites and apps don’t apply font sizes consistently. A modular scale is a great way to relate font sizes to each other, and it also helps solve this problem. If you aren’t familiar with the model, it’s probably what you’ve been approximating by eye anyway.
A modular scale requires some basic math, which can be implemented as a recipe. One input is a “base” type size token that represents the size of the body text. Another input represents the “scale” of the relationship between sizes on the ramp. The outputs are adaptive tokens that calculate individual sizes relative to the base, like “-1”, “+1”, or “+2”. Your site can maintain the design intent by keeping the “scale” constant while addressing your customers’ visual or accessibility needs by allowing them to adjust the “base” size for readability.
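A minimal sketch of such a recipe, assuming a hypothetical typeSize helper and a 1.2 (“minor third”) ratio; the names and values are illustrative, not FAST’s actual type ramp API:

```typescript
const base = 14;   // body text size in px; user-adjustable for readability
const scale = 1.2; // "minor third" ratio; held constant to preserve design intent

// Step 0 is body text; negative steps are smaller (captions),
// positive steps are larger (headings).
const typeSize = (step: number): string =>
  `${(base * Math.pow(scale, step)).toFixed(2)}px`;

const caption = typeSize(-1); // "11.67px"
const body = typeSize(0);     // "14.00px"
const heading = typeSize(2);  // "20.16px"
```

Raising “base” to 16 shifts every size proportionally, so the ramp’s relationships survive the adjustment.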
If your eyes get tired after a long day of staring at a monitor, what if you could increase the font size slightly in the evening hours while maintaining the overall relationship?
With an adaptive type ramp we can move on to more prominent issues than whether a 13, 14, or 15px font size is best for readability.
I think a model like this, built on a solid naming foundation for the ramp, conveys the intent far more clearly.
One of the key features of Adaptive UI is the ability to set the value of a design token for any component within the document hierarchy.
Let’s look at the density system for an example of nesting and overriding token values.
Here we have a marketing site where the design intent is to give the main content some room to breathe. We’ve increased the “density” by “+2”, which has increased the padding within the components and the spacing between them.
We also have a sign-in form, but the design intent is for it to be secondary, following the default “base” density of “0”. These are the same components, the same Button and Text field, which can intelligently size themselves through adaptive density. But the density is not inherent to the sign-in form, as it would be if we had built the Button and Text field with t-shirt sizes like “small”, “medium”, and “large”. What if we want to use that same sign-in form on a dedicated page? Here’s the same form with only a single design token value changed.
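The override behavior can be sketched as a token that resolves values up the element hierarchy. This is loosely modeled on FAST’s DesignToken setValueFor/getValueFor pattern, but the Token class and node shape here are simplified illustrations:

```typescript
interface Node { parent: Node | null }

class Token<T> {
  private values = new Map<Node, T>();
  constructor(private fallback: T) {}
  setValueFor(node: Node, value: T): void { this.values.set(node, value); }
  getValueFor(node: Node): T {
    // Walk up the hierarchy until an explicit value is found.
    for (let n: Node | null = node; n !== null; n = n.parent) {
      const v = this.values.get(n);
      if (v !== undefined) return v;
    }
    return this.fallback;
  }
}

const density = new Token<number>(0); // "base" density is 0

const page: Node = { parent: null };
const mainContent: Node = { parent: page };
const signInForm: Node = { parent: mainContent };
const heroButton: Node = { parent: mainContent };

density.setValueFor(mainContent, 2); // marketing content gets room to breathe
density.setValueFor(signInForm, 0);  // the form stays at base density

// heroButton inherits 2 from mainContent; signInForm resolves to 0.
```

Moving the form to a dedicated page just means it resolves against a different ancestry; no component code changes.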
To further this example, perhaps there is a footer that is more compact, using a “density” of “-1” or “-2”. Again, the same components would be smaller and closer together using the default system. This system works with relative or absolute values. That is, the sign-in form could be fixed to the “base” density, or it could be specified as “-2” density relative to its container.
Implementing relative values like this using CSS functions would be messy because there is no way to get the parent value. You’d have to build it with fixed levels and wouldn’t be able to recompose your components.
For a longer discussion on type ramp sizes and density and the relationship between display scaling or zooming, see my planning for the density system.
I hid another scenario in here: the buttons and fields have a subtle drop shadow. This comes from the “elevation-recipe”, which takes a parameter for the “size” of the elevation and creates a blurred box-shadow. Recipes are just design tokens and have the same features. Here is what we can do if we override the “elevation-recipe” itself and provide a new implementation.
The box shadow is now horizontally offset with no blur. Because we changed the recipe instead of the direct token value, we automatically get the same scale of elevation treatment in both appearances — the plain input fields have a smaller elevation “size” than the buttons.
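A minimal sketch of what overriding a recipe might look like, assuming the recipe is just a function stored as a token value; the recipe name and shadow values are illustrative:

```typescript
type ElevationRecipe = (size: number) => string;

// Default implementation: a soft blurred shadow scaled by elevation size.
let elevationRecipe: ElevationRecipe =
  (size) => `0 ${size}px ${size * 2}px rgba(0, 0, 0, 0.14)`;

const softShadow = elevationRecipe(4); // "0 4px 8px rgba(0, 0, 0, 0.14)"

// Override: a hard horizontal offset with no blur. Buttons and fields
// each keep their own "size", so the relative treatment is preserved.
elevationRecipe = (size) => `${size}px 0 0 rgba(0, 0, 0, 0.4)`;

const hardShadow = elevationRecipe(4); // "4px 0 0 rgba(0, 0, 0, 0.4)"
```

Because every consumer calls the recipe with its own “size”, swapping the implementation restyles all of them consistently.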
This model of overriding values can be done for any token in any of the adaptive systems you use or create.
Recipes can support light and dark theming as we saw with “hint” text, or more advanced theming like custom elevation shadows.
There’s another scenario in the previous example, which is the accent color used on the buttons. In the marketing site the default brand color used for the CTAs is green. On the sign in form the accent color has been overridden to blue.
One of the issues I mentioned in the first post is the loss of underlying decisions. In that example we looked at different colors of buttons and how the application of rest and hover states was inconsistent. One possible model using Adaptive UI is similar to the “hint” text color example at the start of this post, but in this case for a set of related colors, using the “contrastAndDeltaSwatchSet” algorithm.
The design decision is that buttons darken on hover. This is captured in the value for the “accent-fill-hover-delta” token, “2”, meaning 2 steps darker on the “accent-palette” (it’s helpful to visualize a palette going from white to black, left to right). The decision for the “rest” color is to have contrast of at least 5.5:1, and I’ve added here that the “active” state is lighter than “rest” with a value of “-2”.
The “accent-palette” recipe gets the “accent-base-color” token as a parameter, so we can simply change that single token and the other decisions flow automatically as expected. Regardless of the accent color, the hover and press states use the same design decisions, but with colors derived using the specified base color.
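A simplified sketch of the delta-based swatch set; the real “contrastAndDeltaSwatchSet” algorithm finds the “rest” index by searching the palette for the target contrast, which is omitted here, and the palette and function names are illustrative:

```typescript
// A small accent palette, light to dark (index 0 = lightest).
const accentPalette = ["#e8f5e9", "#a5d6a7", "#66bb6a", "#43a047", "#2e7d32", "#1b5e20"];

function swatchSet(palette: string[], restIndex: number, hoverDelta: number, activeDelta: number) {
  const clamp = (i: number) => Math.min(palette.length - 1, Math.max(0, i));
  return {
    rest: palette[clamp(restIndex)],
    hover: palette[clamp(restIndex + hoverDelta)],   // positive delta: darker
    active: palette[clamp(restIndex + activeDelta)], // negative delta: lighter
  };
}

// Hover delta of 2 (darker) and active delta of -2 (lighter), as in the text.
const accentFill = swatchSet(accentPalette, 3, 2, -2);
// { rest: "#43a047", hover: "#1b5e20", active: "#a5d6a7" }
```

Changing the base color regenerates the palette, but the deltas stay the same, so every accent color darkens and lightens by the same decision.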
So we can finally answer that question about colors for an orange button:
Just to note, a palette in Adaptive UI is not built for human pattern recognition, since the recipes apply their rules rather than a human needing simplicity and consistency (for instance, knowing there are always 10 colors and that colors up to “40” are contrast-safe against white). I’ve simplified the palette here to align with the initial concern, but under the default settings there are many more colors. In fact, since some recipes use deltas or offsets to pick stateful colors, what matters most is the contrast between each color on the palette. If you want your red and green buttons to feel the same when you hover or press them, you need consistent contrast between the swatches. Someday I’ll write a whole post about color palettes.
Now we can check off using underlying design decisions to produce token values.
The “adaptive” part of the recipes also allows for the output to change in real time according to any input you can imagine — for example, weather, time of day, favorite color, unique accessibility needs, unread email count, or stock market performance. The design token infrastructure is built on an observable pattern, so recipes receive notification when a dependent token changes.
Extending the “hint” text contrast example from earlier, we used the value of “4.5” for standard AA contrast requirements. What if we wanted to support the “prefers increased contrast” preference and a WCAG AAA rating with “7:1” contrast? With the color recipe we can simply use a “foreground-hint-contrast” token and change its value from “4.5” to “7” when someone enables that option; everywhere the recipe is used will update to the new color automatically.
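The observable plumbing behind this can be sketched as a token that notifies its subscribers on change; the class and names are illustrative, not FAST’s actual implementation:

```typescript
class ObservableToken<T> {
  private subscribers: Array<(value: T) => void> = [];
  constructor(private current: T) {}
  get value(): T { return this.current; }
  set value(v: T) {
    this.current = v;
    // Notify dependent recipes so they can recompute.
    this.subscribers.forEach((fn) => fn(v));
  }
  subscribe(fn: (value: T) => void): void { this.subscribers.push(fn); }
}

const hintContrast = new ObservableToken(4.5); // AA target

let appliedContrast = hintContrast.value;
hintContrast.subscribe((v) => {
  appliedContrast = v; // in practice, re-run the color recipe here
});

// Someone enables increased contrast (e.g. via a
// matchMedia("(prefers-contrast: more)") listener in a browser):
hintContrast.value = 7; // every subscribed recipe now sees the AAA target
```

The recipe never knows why the input changed, whether from a media query, a user setting, or any other signal; it just recomputes.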
Hopefully it’s becoming apparent how solving the next wave of key design and accessibility problems requires algorithms and effort, and how we’ll never get there with traditional design definitions, even as we start using design tokens.
In part 3 (coming soon) we’ll look at ways to solve some of the remaining concerns with an idea I’ve been calling “modular styling”, some design-to-code tooling, and improvements using the JSON tokens format.