- Similar set of thoughts: [[Article notes: Can we forget about [[gamification]] once and for all?]]

- I played a lot of video games growing up. I played by myself, with friends, with strangers online. My parents always thought that it was a waste of time. Here are [[some of my favorite video games]].

- They probably asked themselves many times, "what is it about these games that makes him play so much?" But that's not what I was thinking to myself. I was thinking about how much I wanted to play, and how I could play as well as I could.

- ## Gamification tends to ignore [[behavior design]], or uses [[lazy behavioral science]]

- [[[[Kurt Lewin]]'s Equation]] says that $$B=f(P,E)$$. In other words, "Behavior is a function of the interplay between [[person-side factor]]s and the contextual factors of their environment"
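
Lewin's equation can be made concrete with a toy model. The following is an illustrative sketch only; the factor names and weights are invented for the example, and nothing beyond the $$B=f(P,E)$$ form comes from Lewin:

```python
# Toy reading of Lewin's equation B = f(P, E): behavior is a function of
# person-side factors (P) and the contextual factors of the environment (E).
# All factor names and weights below are hypothetical, for illustration only.

def behavior_likelihood(person: dict, environment: dict) -> float:
    """Return a 0..1 score for how likely the behavior is."""
    motivation = person.get("motivation", 0.0)   # person-side factor
    ability = person.get("ability", 0.0)         # person-side factor
    friction = environment.get("friction", 0.0)  # contextual factor
    prompt = environment.get("prompt", 0.0)      # contextual factor
    score = motivation * ability * (1 - friction) + 0.2 * prompt
    return max(0.0, min(1.0, score))

# Same person, two different digital contexts:
supportive = behavior_likelihood({"motivation": 0.9, "ability": 0.8},
                                 {"friction": 0.1, "prompt": 0.5})
hostile = behavior_likelihood({"motivation": 0.9, "ability": 0.8},
                              {"friction": 0.9, "prompt": 0.0})
assert supportive > hostile
```

The only point of the sketch is that identical person-side factors produce different behavior when the context changes, which is exactly the leverage an app designer has.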
- [[An app designer has control over a person's digital context]] "The designer has partial to complete control over the information that is presented to the user, how that information is framed, what choices are given to users, and what information people are paying attention to. As long as the user is paying attention to the app, the designer exerts influence over the user's behavior."

- "The thing to remember is that game designers have been designing for digital behavior change for longer than just about anyone", so they have recognized this on some level since the beginning.

- The player of a game is interacting with the in-game world in order to [get what they want and further their goals]([[[[Expectancy Value Theory]] and its role in [[gamification]]]])

- Games thoughtfully design rules and interactions to influence how you get what you want[++](((vq7ICOkbp)))

- {{[[TODO]]}} While "Games thoughtfully design rules and interactions to influence how you get what you want[++](((vq7ICOkbp)))", gamification designers act like [[points]], [[badges]], and [leaderboards]([[[[leaderboard]]s are generally ill-advised or poorly executed]]) work in a vacuum and can just be placed on top of what already exists and magically create engagement.

- We need to consider the role of goals more broadly so that mechanics are simply perceived as being [helpful in furthering their goals](((K2Qwi4e7n))).

- Before taking "inspiration" from games, gamified apps, and their mechanics, gamification designers should consider [the context in which those mechanics were implemented](((vq7ICOkbp))).

- ## Gamification isn't being inspired by games and behavioral science, but rather, by other gamification. [[There could be many genres of [[gamification]]]], but we're essentially stuck with the Foursquare genre.

- Designers aren't being inspired by games, but rather, by other gamification that has been copy/pasted since Foursquare. Frankly, I think most designers of gamification don't even play games. What Foursquare did in 2009 is basically what’s going on today.

- [According to a report by Gartner in 2012,](https://centrical.com/will-80-of-gamification-projects-fail/) by 2014, "80 Percent of Current Gamified Applications Will Fail to Meet Business Objectives Primarily Due to Poor Design".

- Elaborating further on his claims, the report's author said: __"The focus is on the obvious game mechanics, such as points, badges and leader boards, rather than the more subtle and more important game design elements, such as balancing competition and collaboration, or defining a meaningful game economy. As a result, in many cases, organizations are simply counting points, slapping meaningless badges on activities and creating gamified applications that are simply not engaging for the target audience. Some organizations are already beginning to cast off poorly designed gamified applications."__ #quote

- In other words:

- Gamification needs to broaden its toolbelt and get more nuanced in its understanding of the context surrounding game mechanics in games and the underlying behavioral science that drives it all.

- ## Questions like "is gamification effective?" miss the point.

- "When I hear people say "gamification does or doesn't work," I have the same reaction that many people would have if someone says "design doesn't work." Gamification isn't one thing as I define it, but rather an interplay between game design, human computer interaction, behavioral science, and [[behavior design]]."

- Game designers aren't asking themselves whether they should use a leaderboard or not. They are [asking themselves how people should or could be playing the game](((vq7ICOkbp))), and how mechanics fit into an overall system meant to deliver an experience, which is a far more interesting question.

- Games aren't made up of the additive effect of a bunch of individual mechanics working independently from each other. Instead, they function as a system of interacting parts where the player expresses agency in how they play.

- Games don't implement mechanics for the sake of those mechanics. [[points]], [[badges]], and [[leaderboard]]s aren't effective in a vacuum. Mechanics effect change in the context of [the problems they are attempting to solve.]([[relations]])

- ![](https://cdn-images-1.medium.com/max/1600/1*uuFBEcNIG7yZ2bgF4nKvEg.jpeg)

- {{[[TODO]]}} [[stub]]

- __This is a reproduction of an article I posted on Medium a while ago. I decided to connect it up within my public Roam.__

- ### What’s Appening? Gamification and Playfulness in Headspace

- Imagine you are on a date and at the end of the night you want a goodnight kiss. You ask, "Would you like to kiss me?" They say yes, but let’s say you weren’t too sure if they would, so you sweeten the deal and say "I’ll give you $10 if you kiss me." Do you think they would still be interested?

- It’s a silly thought experiment, but it illustrates an important point: if someone is already motivated to do something, adding a reward like money or points won’t make them more motivated. They are probably not thinking, "Oh, I get to kiss you AND earn $10? Big win!" [[self-determination theory]]

- [Behavioral economics research shows that adding external rewards often backfires if you’re already intrinsically motivated.](https://www.aeaweb.org/articles?id=10.1257/jep.25.4.191) Why might that be?

- Well, the first reason could be that paying money for intimacy is generally frowned upon, but that’s example-specific.

- Alternatively, your date could be replacing the question of whether they want to kiss you with the less favorable question of whether $10 is worth a kiss. That can be generalized to more situations.

- Being offered an external reward takes away one’s ownership of the decision. People can justify it as doing it for the money, and if they’re getting paid, they better be paid enough.

- [In studies of volunteers who collected donations, the group that wasn’t paid at all collected the most donations, followed by the high-pay group, followed by the low-pay group.](https://academic.oup.com/qje/article-abstract/115/3/791/1828156)

- [Rewards can work, but they generally only work for short-term rather than long-term change.](https://www.aeaweb.org/articles?id=10.1257/jep.25.4.191) As I wrote in my [previous article,](https://uxplanet.org/whats-appening-adding-behavioral-science-to-minimalist-design-d179a67f2b75)

- "Enhancing and enabling intrinsic motivation is one of the best ways to create long-term behavior change. According to [[self-determination theory]],[*](https://books.google.com/books?hl=en&lr=&id=SePipgh2z7kC&oi=fnd&pg=PA416&dq=self-determination+theory&ots=_NinodN0zT&sig=ToUjQg_nkaSQ7jMByaJ_DhgMa9M#v=onepage&q=self-determination%20theory&f=false) our intrinsic motivation is based on three main drives for competence, autonomy, and relatedness.

- [[competence]] is about a need to improve our skills and knowledge.

- [[autonomy]] describes wanting to personally identify with the task at hand and be in control of our own actions.

- [[relatedness]] has to do with wanting to connect to others through our actions."

- The big reason that rewards in an app work for short rather than long-term behavior change is that if you stop caring about or receiving the reward, then you stop doing the behavior that it’s trying to encourage. [Variable ratio rewards like I’ve described in my article on Tinder](https://uxplanet.org/whats-appening-how-tinder-influences-you-adb0c0e0c917) can cause the behavior to last longer, but eventually it’ll stop. Even so, when an app is gamified, it usually relies on externally rewarding desired behaviors with points, badges, achievements, and leveling up. This doesn’t create meaningful change, and leads you to eventually stop using the app when you get tired of it.
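
The difference between reward schedules can be sketched in a few lines. This is an illustrative simulation, not code from any of the linked articles; the ratio of 5 is arbitrary:

```python
import random

# Two classic reinforcement schedules (parameters are arbitrary):
# - fixed ratio: a reward after every `ratio`-th action (predictable)
# - variable ratio: each action pays off with probability 1/ratio, so the
#   next reward is unpredictable (the slot-machine pattern)

def fixed_ratio(n_actions: int, ratio: int = 5) -> list:
    return [(i + 1) % ratio == 0 for i in range(n_actions)]

def variable_ratio(n_actions: int, ratio: int = 5, seed: int = 42) -> list:
    rng = random.Random(seed)
    return [rng.random() < 1 / ratio for _ in range(n_actions)]

fixed = fixed_ratio(1000)
variable = variable_ratio(1000)
# Both schedules pay out at roughly the same average rate; only the
# predictability differs, and the unpredictable one sustains behavior longer.
print(sum(fixed), sum(variable))
```

Either way, once the reward stops mattering to the user, the behavior stops with it; the schedule only changes how long that takes.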
- "definition:: I loosely define gamification as the application of principles of game design and behavioral science to influence user behavior so they [voluntarily use the app]([[Products are fundamentally voluntary]])" What so many gamification designers seem to miss is that games aren’t just about points and achievements. They’re about being fun and playful.

- #c they're doing [[lazy [[gamification]]]]. [[Most [[gamification]] is pretty bad]].

- Headspace is a guided meditation app that uses game elements that increase and enable intrinsic motivation to create long-term change.[++]([[self-determination theory]]) In this article, I’m going to look at how they do that, as well as how they might be able to do that better.

- ### **Creating a playground**

- [Playfulness is about facilitating the freedom to explore and fail within boundaries.](https://link.springer.com/chapter/10.1007/978-3-319-10208-5_1) Players can use the app how they want, engaging with what they want within the app, and establishing their own constraints. Picture a schoolyard playground: there are a lot of fun things that you can choose to interact with. However, if you were forced to use the monkey bars, it might just feel like exercise.

- According to ["A RECIPE for Meaningful Gamification":](https://link.springer.com/chapter/10.1007/978-3-319-10208-5_1)

- "The concept of a space where people can roam, explore, see where others are, engage with those others, and set temporary rules and goals can create a gamification space that people engage with because it’s playful."

- Headspace has the roam and explore part down pat. As you can see, you have a lot of options (45 meditation packs, 54 singles, 6 minis, 8 meditations for kids). Normally an app designer might worry about choice paralysis, [where users have so many options that they don’t pick any.](https://keithdwalker.ca/wp-content/summaries/m-p/Paradox%20of%20Choice.Schwartz.EBS.pdf) However, Headspace bunches its packs together into categories (for example, the packs for creativity, productivity, and finding focus are all under "Work & Performance"). This means you first look for a category that interests you, and then pick a pack you like within that category, which reduces the number of options at each decision point.

- However, they don’t literally ask you to make each of those decisions in order, which is important. They present all of this information to you at once, inviting you to scroll and explore. You’re encouraged to try anything that’s interesting to you. If they wanted to make this exploration more playful and fun, they could add art to the cover of each pack instead of just sorting them by color. Another app that uses the same layout to encourage exploration is Netflix. Look familiar?

- ![](https://cdn-images-1.medium.com/max/1200/1*fmhCalhjDsf5u6H4PVqWow.jpeg) Previews of 3 different screens within the app. There’s a lot to discover!

- ![](https://cdn-images-1.medium.com/max/1600/1*rjsaZsts3dmCPNfug-niOg.jpeg) The space for exploration in Headspace and Netflix

- The social interaction in Headspace is "passive," opting to show you your friends’ meditation stats instead of giving you ways to interact with friends. It does not show you which meditations they’ve done, which is understandable for privacy reasons. You can "nudge" your friends to meditate more, but instead of sending a simple "poke" notification, it asks you to write out a text or email without a template, which is a little cumbersome. Due to the lack of interactivity, this "passive" social network does not encourage you to engage more heavily with the app. Headspace needs "active" social interaction so users can actually connect to others by engaging the service more deeply.

- If I do a meditation that I think my friend would like, I should be able to send it to them. I should be able to make meditation groups, or have groups made for me by Headspace, where we complete challenges together each week, cooperating to "meditate for 500 minutes" or "try every Single."

- To really nail down playfulness, they need to give users the ability to "set temporary rules and goals." You know how kids on a playset might try to see who can go down the slide the fastest, start saying that certain areas are theirs, or climb to the top just because they can? Same concept.

- Headspace doesn’t do this explicitly, but they could. They could let you "favorite" meditations that you like and put them into playlists. They could give you a template for setting up challenges for yourself. If you set your own goals, you’ll feel more autonomy and be more intrinsically motivated to complete them. Perhaps they could let you write down a short goal for the week that pops up every time you open the app. [This allows the app to use your desire to behave consistently with your self-image for your betterment.](https://www.amazon.com/Influence-Psychology-Persuasion-Robert-Cialdini/dp/006124189X)

- ### Storytelling that engages your sense of self

- The use of narrative storytelling in video games is another way that they draw you in without explicitly rewarding you. Narratives allow the player to draw connections between their past experiences, present reality, and future benefits. The challenge comes with giving the player a sense of control and autonomy over the story, rather than just being along for the ride.

- [There are four main types of narrative that exist in games.](http://summit.sfu.ca/item/209)

- An evoked narrative sets the game in a world that already exists, like real life.

- Enacted narratives are non-interactive; think cut-scenes and videos that interrupt gameplay.

- Embedded narrative is when the story is built into other game elements, like picking up items or exploring a new territory.

- The most powerful type, however, is an emergent narrative. This allows the player to create the story by making choices that matter, inviting the player to identify with the story. Headspace works primarily through an embedded narrative, as the meditations you choose and the animations you watch determine the story that you’re exposed to. This type of narrative encourages the user to explore, because they know that they’ll unveil more by trying more.

- ![](https://cdn-images-1.medium.com/max/1200/1*5WIH_ozNEJL3RXyCgQ_Xug.png)

- There’s a little bit of emergent narrative shown to you in the "My Journey" tab on your profile, where it shows you the meditations you’ve done, the landmarks you’ve achieved, and the animations that you’ve unlocked. There could be more, though. See the "profile picture" of the smiling cartoon creature? As far as I can tell, you can’t change it. But what if it changed every time you did a meditation? If you did a creativity one, it might start painting, or if you do one of the running singles, it (you guessed it) runs. This cartoon creature is supposed to represent you, so if it acted like you as well, that would add a playful piece of emergent narrative that connects your actions in real life to what’s happening in the app.

- ### **Reflection to enhance learning**

- If a gamification service wants the players to make real changes in their life, then adding reflection as a feature is invaluable.

- If you want to utilize reflection well in your app, there are three main components: [description, analysis, and application.](https://pdfs.semanticscholar.org/289b/34d437ccb17fe2543b33ad7243a9be644898.pdf)

- Description is about figuring out what happened.

- Analysis is about realizing its connections with your real life.

- Application is about ideating how to use what you’ve learned in new situations.

- Nike+, a brilliant app in the gamification space, incorporates reflection directly into the user’s journey. After a run, it asks you how you feel, shows you a map of where you’ve run and your running speed, asks you what shoes you were wearing, and what sort of turf you were running on. Then it asks if you want to share your post-run reflection on social media.

- Meditation is an act that’s basically begging to be reflected on. After you finish, Headspace could ask you how you felt on an emoji scale and whether any particularly important thoughts came to mind that you want to jot down. They could compile these thoughts into a "meditation journal" on your profile, adding an extra layer of emergent narrative. Alternatively, they could build it into the way your "Journey" is already displayed, so if you tapped on a completed meditation that had a little notebook icon, you could read what you wrote.

- [Reflection is done best when it’s done with others with a shared experience.](http://scottnicholson.com/pubs/completingexperience.pdf) You might miss something that others noticed, and vice versa. Headspace could allow you to share your reflections on Facebook, with people on your buddy list, with meditation groups that you or the app created, or with other people working on the same meditation pack as you. This sort of social reflection helps you learn more and brings you closer to other people through the app, tying Headspace to relatedness, one of the three main drivers of intrinsic motivation, which in turn causes individuals to use the app more.

- ### **Doing rewards right**

- What little Headspace does with rewards, they do right. Upon reaching certain milestones, which aren’t made clear to you ahead of time, you unlock animations that teach you how to enhance your meditation practice. This works (and does not reduce intrinsic motivation) for a few reasons.

- The first is that the rewards are unexpected. Because you did not plan ahead for them, you can’t justify your actions as being just for a prize. As a result, they feel more like a celebration of your work, rather than payment.

- The second is that it touches on a core driver of intrinsic motivation, the desire for competence. As you meditate more, you naturally improve. Once you start to feel like you might have hit a plateau, Headspace gives you new things to think about that allow you to improve and meditate more.

- Games are engaging for more than just the rewards they give you.

- You can picture a playful environment by imagining a schoolyard playground, where you can see and engage with a variety of structures and people.

- A game’s narrative and the way it’s exposed can engage and motivate the player.

- Reflection is a key component of the learning process that invites the player to connect their actions in the game to the game’s real-life value and lessons.

- If you’re going to use rewards, put a lot of careful consideration into how you’re going to do it. Feel free to contact me if you’re interested in how to do them right.

- If I use a [[complex page]][++](((4t319eiua))), then I am essentially linking all of the ideas present within the overall page every time I mention that page.

- In the example of this page, every time that I reference [[[[complex page]]s are one method for expressing semantic meaning]], it will also show up in the linked references for **complex page.**

- However, this is unidirectional. It's not the case that every time that I reference [[complex page]], it will show up in the linked references for the page that we're on.

- [[example]]: "[[[[puzzle game]]s for [[power user]] [[onboarding]]]] is a **complex page name** that tells Roam that whenever I reference that page, the block will also show up in the linked references for [[puzzle game]], [[power user]], and [[onboarding]]. I'm expressing a light hierarchy."
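
The mechanics of nested references can be illustrated with a small scanner. This is a hedged sketch, not Roam's actual parser; it only demonstrates how a single complex page name yields a reference to every page nested inside it:

```python
# Extract every [[page]] referenced in a string, including pages nested
# inside other [[ ]] pairs, using a simple stack-based scan.
# (Illustrative only; Roam's real parser is more involved.)

def extract_pages(text: str) -> list:
    pages, stack, i = [], [], 0
    while i < len(text):
        if text.startswith("[[", i):
            stack.append(i + 2)                # remember where this name starts
            i += 2
        elif text.startswith("]]", i) and stack:
            pages.append(text[stack.pop():i])  # innermost pairs close first
            i += 2
        else:
            i += 1
    return pages

refs = extract_pages("[[[[puzzle game]]s for [[power user]] [[onboarding]]]]")
print(refs)
# The inner pages come out first and the full complex page name comes out
# last, so one mention links all four pages.
```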
- I can say [[social [[self-efficacy]]]] to express that social self-efficacy is subordinate to self-efficacy more broadly.

- {{[[TODO]]}} More ways of conveying semantic meaning through complex page names to come

- "definition:: Continuous onboarding is an onboarding that never stops. Users are constantly being shown how to be better at using the app in an attempt to increase [[user involvement]]."

- # In order for a continuous onboarding to come from the community, new users need to be funneled into a community that welcomes new and experienced users and is filled with people that fulfill nourishing [[[[community]] roles]].

- {{[[TODO]]}} The community can be intentionally designed. The way that it is designed currently encourages and discourages behaviors and roles, whether it was designed intentionally or not. More on this to come.

- # There is a plethora of goals that need to be accounted for in a [[horizontal product]]

- This is because [[[[user goal]]s change over time]] as [[the [[skill level]] of each user grows over time]], and there are [[[[individual difference]]s between people in prior [[user goal]]s]].

- ## The app can only do so much to onboard new and experienced users

- On its own, a horizontal product is likely incapable of adapting to the plethora of goals that evolve over time for a base of users who had different goals from each other to start with. This is a hard problem for apps to solve.

- The app can only be so intelligent in recognizing when a user could learn something new.

- The app can react to certain behavioral signals coming from the user:

- [[user behavior]] could signal that the user is trying to do something in an inefficient way

- [[user behavior]] could signal that the user has progressed to a skill level that's necessary to learn a more advanced technique

- The app can react to declared preferences and clearly expressed **user goals.**

- The user could do a search for information. This requires the user to have at least a fuzzy idea of what they are looking for.

- The user can describe to the app in some way what their goals are. This can come from "Questions during the onboarding".

- ## People are able to facilitate continuous onboarding because they are far more dynamic and responsive than a machine, which is necessary given that there is a plethora of goals that need to be accounted for.

- ## The app needs help from its users to create a continuous onboarding

- ### The problem with reacting to declared preferences and clearly stated goals is that people often don't know what they're missing until they are shown. This is due to a failure of imagination.[++]([[the user may have a lack of imagination as to what [[user goal]]s they can accomplish]])

- Other people are more helpful than a pre-programmed app onboarding at providing inspiration for users.

- Some users may end up content with a static state of usage. [[The user should believe that their actions in the app lead to goal achievement]], and they believe it because they've figured it out for themselves.

- There's often no need to worry about these users, but sometimes this is simply because [the user doesn't recognize situations for app usage]([[help the user to recognize situations for app usage]])

- Some users are surprised and delighted when they are shown how to do things that they didn't know that they wanted to do. On the other side of the coin, some users end up frustrated that they can't do the things they didn't even know they wanted to do. When people are exposed to use cases they didn't know they wanted, they are invited to escalate their level of [[user involvement]].

- ### Content creators are able to show off powerful ideas that inspire these users. #[[[[community]] roles]]

- In order to create a **continuous onboarding,** content creators need to work at varying levels (beginner, intermediate, advanced) and for various use cases so that the breadth of different users all have inspirational and instructional material.

- Content creators need multiple ways that they can provide for other users. They can make videos or write articles, but can they create templates that anybody can use and go beyond simply the instructional into the immediately practical?

- [[[[self-efficacy]] can be augmented to be [[social [[self-efficacy]]]] with community support]]

- ### People are able to understand the meaning of behavioral signals from other users more effectively than a computer

- People are able to understand what someone is saying when they say that they are struggling, provide more personalized instruction as to how to accomplish a specific goal, and troubleshoot.

- In the community, we need people that will respond to the questions of others. #[[[[community]] roles]]

- I hear people say all of the time that they don’t really care about the gamification in the apps they use. They pretty much ignore it.

   edit   deselect   + to AI

 

- Gamification should be made so that people WANT to pay attention and engage with the app because that furthers their goals.

   edit   deselect   + to AI

 

- If you take away nothing else from this page, just remember: people can choose to use your product or not. Understanding user goals and enabling them as best as possible makes that choice favorable, and is just good design.

- What is a game?

- According to [[Reality is Broken]]:

- "All games share four qualities:"

- "Goal"

- "Something to focus on. It gives the player a sense of purpose."

- "Rules"

- "Inform the player how they can achieve their goals. These rules empower players by giving them a sense of control over the world."

- "Feedback System"

- "Feedback shows the user how their current actions relate to their goals. This can help the user notice their own progression and learn the rules of the game."

- ^^((zaruuQt6J))^^

- "People can play a game or not, it’s their choice. This means that the user needs to accept the in-game goals as their own, and pursue those."

- # ^^Just like games, [[Products are fundamentally voluntary]].^^

- "People can always choose to use the product, use something else, or not use anything at all. That's their [default state of being]([[default behavior]]), and you're trying to get them to do something different in using your product."

- [[Expectancy Value Theory]]

- "Broadly, Expectancy Value Theory says that people will do a behavior if:"

- "The task has a valued outcome"

- This leads to "Voluntary Participation". People won't voluntarily do a task if they don't think that it's worthwhile.

- "The relationship between performance of certain actions and achievement of a goal is clear"

- This leads to "Voluntary Participation". If people don't believe that doing a task leads to the outcome that they are looking for, then why do said task?

- "Rules" and "Feedback System"s can make the relationship between performance and goal achievement clear

- "The person believes that by exerting effort, they will be able to perform the steps necessary to achieve the goal"

- This leads to "Voluntary Participation". If people don't believe that they are capable of some action, then they are unlikely to choose to do it.

- [[example]]: if someone asked me to play a game of pickup basketball with them, I probably wouldn't be too enthusiastic about that because have they seen me attempt to play a sport?
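- The three conditions above can be sketched as a simple conjunction. This is a minimal, illustrative model of Expectancy Value Theory as described here, not a formula from any specific paper; the function and parameter names are invented.

```python
# Hedged sketch: per the text, voluntary participation requires all three
# beliefs at once. Names and the boolean simplification are assumptions.

def will_participate(values_outcome: bool,
                     actions_lead_to_goal: bool,
                     believes_capable: bool) -> bool:
    # A person does the behavior only when every condition holds.
    return values_outcome and actions_lead_to_goal and believes_capable

# The pickup-basketball example: the outcome may be valued and the rules
# clear, but low self-efficacy alone is enough to opt out.
assert will_participate(True, True, False) is False
```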

- # [[The user should believe that they are capable of performing actions within the app]][++](((shir6zGgG)))

- # [[The user should believe that their actions in the app lead to goal achievement]][++](((pJwzQSsgi)))

- # [[The user should believe that the app will help them achieve a goal that they actually have]][++](((N9SykWJJ2)))

- Failure is inherent to a [[roguelike]]. In order to be successful, the roguelike needs the player to complete the following loop: start a run, make it as far as you can go, fail, and then repeat. How do roguelikes encourage the player to start over again when they fail? [[failure]]

- When people think of gamification, they tend to think of points, badges, and leaderboards. They may make claims such as "Gamification is good for this and bad for that" or "gamification should only come in at X stage." This is [[lazy [[gamification]]]]. It treats gamification as a monolithic thing. Gamification can be done without feeling like gamification, without points, without badges, without extrinsic rewards.

- ## Restarting after death is easy

- When I hear people say "gamification does or doesn't work," I have the same reaction that many people would have if someone says "design doesn't work." Gamification isn't one thing as I define it, but rather an interplay between game design, human computer interaction, behavioral science, and [[behavior design]].

- In Dead Cells, when you die, it immediately starts you over at the beginning without asking you if you'd like to try again.

- [[There could be many genres of [[gamification]]]]

- This is like Netflix, where if you don't do anything at the end of an episode, it will automatically move on to the next.

- The user has the exact same choice as before: they can start the next run or they can quit. However, instead of asking the player to deliberate between those two options, it makes the default option to start again. The player has already started, so quitting is turned into an effortful decision. [[choice architecture]]

- "Some games reward failure, giving you a boost in your next life"

- During each run, you're killing monsters and collecting money. When you die, it actually allows you to use some of the money that you had left when you died on your next run, making you feel as though you have already started and helping you move past the part at the beginning that you've done a thousand times more quickly. [[goal gradient effect]]

- Some runs you get such a lucky start that you can't help but give it a shot. At the beginning of each run, they give you three random weapons. Sometimes, you get a combination that is perfect for you and you feel as though you have to play another run because who knows when you'll get another combination that good.

- In [[instructional design]]

- definition:: Kurt Lewin's equation, $$B=f(P,E)$$, states that a person's behavior is a function of who they are as a person and the contextual factors of their environment.

- # [[open-world experience]]

- Kurt Lewin's Equation is commonly understood as saying $$B=P+E$$. This is [[lazy behavioral science]].

- ## ^^((25CXpnV79))^^

- Behavior is a function of the interplay between [[person-side factor]]s and the contextual factors of their environment

- [[[[metroidvania]]s gate user [[progression]] with soft progression gates]] and with [hard progression gates]([[[[metroidvania]]s gate user [[progression system]] with [[hard progression gate]]s]]). In a nonlinear course, [we could model this progression system](https://twitter.com/RobertHaisfield/status/1265467836249473033?s=20) by asking users to find some password that is unlocked by answering a series of questions that can be found through exploration of new and past learning material. These passwords unlock new areas of the course.
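- A minimal sketch of that password-gate idea, under stated assumptions: answers gathered by exploring the material combine into a password that unlocks the next area of the course. Every name, function, and the hashing scheme here is invented for illustration, not part of any real course platform.

```python
# Hypothetical hard progression gate for a nonlinear course: the learner
# collects answers from exploration, and the combination acts as a password.
import hashlib

def password_from_answers(answers: list[str]) -> str:
    # Normalize so capitalization and stray spacing don't lock learners out.
    joined = "|".join(a.strip().lower() for a in answers)
    return hashlib.sha256(joined.encode()).hexdigest()[:8]

# Example answers are placeholders.
EXPECTED = password_from_answers(["feedback loop", "spaced repetition"])

def unlock(attempt: list[str]) -> bool:
    return password_from_answers(attempt) == EXPECTED

assert unlock(["Feedback Loop ", "SPACED repetition"])
assert not unlock(["wrong", "answers"])
```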

- Who the person is impacts how they respond to environmental factors and what environments they place themselves within.

- "[[backtracking]] shows you old material in a way that feels fresh"

- Environmental factors over time can shape who the person is, and the pressures of the situation can cause people to act in a way that is counter to who they are.
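- The difference between $$B=f(P,E)$$ and the lazy reading $$B=P+E$$ can be sketched numerically. The numbers and the particular choice of f below are purely illustrative assumptions, chosen only to show that an interaction behaves differently from a sum.

```python
# Illustrative sketch (hypothetical numbers): Lewin's B = f(P, E) says behavior
# depends on the *interplay* of person and environment, not a simple sum.

def additive(person: float, environment: float) -> float:
    # The "lazy" reading: person and environment contribute independently.
    return person + environment

def interactive(person: float, environment: float) -> float:
    # One possible f: the environment's effect is scaled by who the person is.
    return person * environment

# A low-sociability person (0.2) in a strongly social environment (0.9):
# the additive model still predicts fairly high behavior, while the
# interactive model predicts a muted response.
print(additive(0.2, 0.9), interactive(0.2, 0.9))
```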

- Atomic elements of thought gain new meaning based on the context in which they are presented. If an idea is encountered through a different path than before, then it gains new meaning[++](((5Hi-1xU4J)))

- We want the learner to uncover new meaning

- How does this build on [[Mark Robertson]]'s thoughts: "[I]n terms of the idea of a curated path, whether this is online or in-person, curation requires contextualization. Your suggestion towards the thread by [@lalizlabeth](https://twitter.com/lalizlabeth) is appropriate. Roam offers the back-links to track and front-load the applicable contexts."

- "You gain new abilities that open up new paths over where you've previously explored"

- Certain pieces of information in the course could be gated behind required lessons learned

- "[[user skill]] based gates make it so the user can make it further if they have a high enough [[skill level]]"

- "Requires the user to be able to enter a [[failure state]] and retry and succeed over what was previously a challenge. This makes surpassing the gate that previously made them turn away all the more satisfying."

- This could come in the form of an assessment where the user needs to demonstrate a certain level of understanding in order to access certain content, but is able to retry after gathering that understanding.
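- That retryable assessment gate can be sketched in a few lines. The 0.8 threshold and the list-of-attempts framing are assumptions for illustration, not a real grading API.

```python
# Minimal sketch of a skill-based hard gate with unlimited retries: the
# learner must demonstrate understanding to pass, but can return after
# gathering more understanding. Threshold is an invented parameter.

def gate_passed(score: float, threshold: float = 0.8) -> bool:
    return score >= threshold

# Two failure states, then success after review — surpassing the gate that
# previously turned the learner away.
attempts = [0.55, 0.7, 0.85]
results = [gate_passed(s) for s in attempts]
assert results == [False, False, True]
```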

- {{[[TODO]]}} #[[open question]] Should the player be told explicitly where to find that information, or should the player be given the freedom to explore around and find that information for themselves?

- {{[[TODO]]}} #[[open question]] should the player be told explicitly what they unlock in order to make paths for exploration more explicit, or would it encourage more exploration, backtracking, and recall to have the user find what they just unlocked on their own? #[[[[backtracking]] is a form of [[Spaced Repetition]]]]

- {{embed: ((N2dvnnIEX))}}

- "An important feature of metroidvanias is that they **invite the user to notice clear cues for when to apply what they have just learned, which makes the learning stick.**" So how can we teach skills and make clear cues within the instructional design in order to increase the likelihood that they notice the opportunities they have available to apply those learnings to a situation?

- {{[[TODO]]}} Relevant to this discussion is the discussion about player agency in [[open-world]] games more broadly, but also Metroid Zero Mission vs. Super Metroid. There were videos on youtube that analyzed every single metroid game. **DIVE IN**

- "At this point, most users will bookmark the location of the exploration point in their head, and come back to it when they've found what's necessary to make it further."

- ""Key" based gates require the user to acquire some item or ability from elsewhere on the map."

- The user could be told where they need to acquire some item in the course, or even be sent on a [[scavenger hunt]] to get there.

- "Hidden goodies"

- "These mini-lectures would be linked to relevant mini-lectures so people could see both suggested prerequisites and suggested follow-ups, allowing the learner to explore their curiosity."

- ### There should be [[aha moment]]s sprinkled throughout the course

- ^^Create a [[positive feedback loop]]^^

- The nonlinear course could let you know what atomic lessons you unlock whenever you complete something

- This could be visually represented in a [[skill tree]] / [[skill web]]

- "Curation could perhaps look something like skill trees where skills (mini-lessons) can exist in multiple locations at once. This simultaneously shows users recommended paths and gives the user progress markers towards completion. https://twitter.com/RobertHaisfield/status/1261312251924897793?s=20"
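- One way to model such a skill web is as a directed acyclic graph where a mini-lesson unlocks once its prerequisites are complete, and completion counts double as progress markers. The lesson names and all-prerequisites rule below are illustrative assumptions, not a prescribed design.

```python
# Sketch of a skill web: lessons as nodes, prerequisite edges between them.
# A lesson is unlocked when all of its prerequisites are completed.

prereqs: dict[str, list[str]] = {
    "intro": [],
    "habits": ["intro"],
    "feedback loops": ["intro"],       # a lesson can sit on multiple paths
    "streak design": ["habits", "feedback loops"],
}

def unlocked(lesson: str, completed: set[str]) -> bool:
    return all(p in completed for p in prereqs[lesson])

def progress(completed: set[str]) -> float:
    # Simple progress marker toward completing the whole web.
    return len(completed) / len(prereqs)

done = {"intro", "habits"}
assert unlocked("feedback loops", done)
assert not unlocked("streak design", done)
assert progress(done) == 0.5
```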

- {{[[TODO]]}} [[social comparison]] to people above you in skill can be motivating, but only if that difference feels achievable.

- Tactics for hard progression gates

- A soft progression gate is when a player isn't strictly gated from progressing further, but the gate decreases the likelihood that the user makes it further.

- On a leaderboard, it often feels impossible to close the gap between yourself and those at the top.

- "Key" based gates require the user to acquire some item or ability from elsewhere on the map.

- Tactics for soft progression gates:

- The leaderboard is then only motivating for people who are already near the top.

- [[user skill]] based gates make it so the user can make it further if they have a high enough [[skill level]]

- According to [[The competition–performance relation: A meta-analytic review and test of the opposing processes model of competition and performance]], a meta-analysis of 474 studies on the impact of competition on performance, competition had no overall effect on performance.

- At this point, most users will bookmark the location of the exploration point in their head, and come back to it when they've found what's necessary to make it further.

- These two competing effects balance each other out:

- Requires the user to be able to enter a [[failure state]] and retry and succeed over what was previously a challenge. This makes surpassing the gate that previously made them turn away all the more satisfying.

- Competition only leads to an increase in performance when people have [[approach goal]]s. These are slightly more common in a competitive setting, and even then the effect size is modest.

- "Mastery loop [[screenshot]] ![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRobAndHisNotes%2FJ9Xgls1EYn.png?alt=media&token=32e9627d-b720-4e0d-a278-97d0e6c8caec)"

- Competition is counterproductive when people have [[avoidance goal]]s. These are slightly less common in a competitive setting, but the effect size is significant.

- ^^((EEf6MK9S8))^^

- Can you imagine sending someone a notification telling them to get back on the app so they don't lose their spot?

- The opportunity to re-engage with something that previously was challenging, perhaps to the point of you failing, gives the user a sense of [[accomplishment]]

- {{[[TODO]]}} Leaderboards can provide a social target for a goal

- ### [[example]] In Pokemon, players __want to be the very best, like no one ever was.__ In order to do that, they need strong pokemon.

- People often want to do better than they've done before. They may want to hit a personal record while working out, do better than they did on a previous exam, or weigh less than they did last week. The player is in competition with themselves.

- In the image below: The progress bar shows them how close the Blaziken is to level 52. In order to get to the **next level, they need a certain amount of XP**

- A [[personal leaderboard]] is one way of helping the user to outperform their past self.

- They use color to signal meaning to the player. **Blue** = experience points they've earned **so far**. **Gray** = **what’s left before the next level**
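- The blue/gray split described above is just a fraction of XP earned toward the next level, rendered visually. A minimal text sketch, with invented XP numbers and `#`/`-` standing in for blue and gray:

```python
# Sketch of the XP progress bar: filled portion = experience earned so far
# toward the next level, empty portion = what's left. Numbers are invented.

def progress_bar(xp_into_level: int, xp_needed: int, width: int = 20) -> str:
    filled = round(width * xp_into_level / xp_needed)
    return "#" * filled + "-" * (width - filled)  # '#' = blue, '-' = gray

# 150 of the 200 XP needed for the next level:
bar = progress_bar(150, 200)
assert bar == "###############-----"
```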

- [[example]]: in Lumosity, players are able to see how they performed in each game as compared to their previous best scores.

- "Feedback shows the user how their current actions relate to their goals. This can help the user notice their own progression and learn the rules of the game."

- ![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2FoyivjuiNcg?alt=media&token=151911c2-0016-4d00-a82e-f6a8d09441ad)

- When the player beats a pokemon that's higher level than their own, they earn a lot of experience points. When they beat lower level pokemon, they earn less. This consistent rule teaches the user how to best achieve their goals.

- [[example]]: in Loop Habit Tracker, they show the user a personal leaderboard of their previous streaks. By having a personal leaderboard, the user's previous efforts do not feel as though they are wasted. This is a clever way of addressing the issue that "Streak counters are only really motivating while you have a streak going. In the early stages of a streak, it is not valuable so people don't mind losing them. Upon losing a long streak, [[loss aversion]] leads to a significant feeling of distress. This can demoralize the user, especially if it will take weeks, months, or longer to recover the streak.[++](((g9lr8L82X)))"

- [[screenshot]] of pokemon battle ![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2F7VQpeiO7Eu?alt=media&token=8e544ff1-c59a-4329-a336-b8d861242b0f)

- The personal leaderboard also shifts the goal of the user: instead of simply trying to avoid losing their streak, the user is also trying to outperform their past self. With the [[failure state]] of losing their streak, they then may set the goal to do better than last time.

- ### [[example]] In [[Duolingo]], the user wants to learn a language. In order to do that, they need to practice consistently

- [[screenshot]] of Loop Habit Tracker![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2FMk5IE6iC0w?alt=media&token=4ef85193-7c76-49f0-828a-28617f1a98a7)
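- The personal-leaderboard mechanic above can be sketched in a few lines: past streaks stay visible, so a broken streak becomes a record to beat rather than wasted effort. The streak data here is illustrative, not from the app.

```python
# Sketch of a personal leaderboard for streaks, in the spirit of the Loop
# Habit Tracker example. History values are invented for illustration.

streaks = [12, 5, 30, 8]  # lengths of past streaks; latest one broke at 8

def personal_best(history: list[int]) -> int:
    return max(history)

def next_goal(history: list[int]) -> int:
    # After a failure state, the goal shifts to outperforming the past self.
    return personal_best(history) + 1

assert personal_best(streaks) == 30
assert next_goal(streaks) == 31
```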

- The progress bar and immediate feedback upon completing the activity makes the rules clear- "I just made half of my progress towards my goal by doing this activity. Great. I need to do a second to complete it. I need to do two activities per day in order to reach my goal."[++](((qOZnVX-NU))) **This is all implicitly communicated through the design.**

- [[screenshot]] of [[Duolingo]] progress bar

- ![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2FIskwHqXLZQ?alt=media&token=806138f4-8a63-4f37-912e-f7aa8ee1a55c)

- Related thought: [[[[puzzle game]]s, problem solving games, templates, and instruction manuals all teach different levels of understanding]]

- definition:: Progress monitoring is showing a person the discrepancy between their present state and their goal, while giving them feedback how their present actions relate to their goal.

- Puzzle games, problem solving games, and instruction manuals all have the user at Point A and trying to get to Point B. In the games, they have to teach you how the game works in order to have you figure out the solution yourself, and they attempt to make it challenging so you have a sense of discovery. In an instruction manual, you need to learn just enough to build it, but you don't need to know how the building blocks work. Instruction manuals just want discoverability, because they want the user to be guaranteed to find the solution.

- A [[puzzle game]] takes you from Point A to Point B by asking you to figure it out, but they make your end goal clear, they focus your attention on a subgoal (the difficult "catch" that keeps the puzzle from being totally straightforward), and they give you contextual cues to help you figure it out.

- To do it well, progress needs to be related to a goal that they already have or are bought into. At that point, feedback on how their actions relate to their goal is perceived as helpful information.

- Defining terms:

- They are built on mechanics and rules, __things you control and how you interact with the environment__. These serve as your building blocks. They give you puzzles where you need to understand how something works in order to solve them.

- # The goal state can be described in three main ways

- A __template__ takes you from Point A to Point B of a problem by giving you the solution.

- Then __puzzle games__ build on this understanding and help the user develop mastery of the mechanics over time by giving you new puzzles that:

- The user likely gains very little understanding from this, but they have a new tool and process that they can copy and paste that lets them punch above their weight in [[skill level]]

- ## Target: You want to get somewhere further along than you are now.

- show you new situation where a known rule applies

- An __instruction manual__ takes you from Point A to Point B in one specific way that is a spelled out process for how to do it that the user needs to execute.

- Imagine you're running a race. The finish line is your target.

- combine multiple known rules together in a solution

- The user gains more understanding of the building blocks from this than with templates. However, because instruction manuals are only able to give you solutions to narrowly defined problems, the user may have some difficulty generalizing what they learn to other situations.

- ## [[[[progress bar]]s can visually enable [[progress monitoring]]]]

- teach you a new rule

- __Instruction manuals__ are a safe way to introduce a new mechanic to users where the user has a low risk of messing up and experiencing a [[failure state]].

- {{embed: ((cl8kcEwHM))}}

- ### Usually, the learning curve of a puzzle game goes like this.

- {{embed: ((X0gKoLoMO))}}

- A [[puzzle game]] takes you from Point A to Point B by asking you to figure it out, but they make your end goal clear, they focus your attention on a subgoal (the difficult "catch" that keeps the puzzle from being totally straightforward), and they give you contextual cues to help you figure it out.

- You are introduced to the basic concepts of the game, the fundamental rules of the game that dictate how you control your character and interact with your environment, through very simple puzzles and tutorials that are meant to give you a grasp of those building block rules.

- ## [[Scoring can provide feedback on how performance relates to internally held goals]]

- A __puzzle__ is a narrowly defined problem, but because the user is generating the solution themselves[++](((-KplflDJc))), they gain a deeper understanding of how to solve the problem again in the future.[++](((EzdSWoRm2)))

- Those rules are taught one at a time, and then you start to see them together.

- {{embed: ((Gh7IGr-Kc))}}

- "Then __puzzle games__ build on this understanding and help the user develop mastery of the mechanics over time by giving you new puzzles that: "

- They are taught through problem solving, which leverages the [[generation effect]] so people are more likely to remember the solution than if they just read the solution in an [[instruction manual]].

- ## [[Skill trees can help the user to both set goals and chart their course]]

- "show you new situation where a known rule applies"

- They give you a starting point (point A) and an ending point (Point B) and they ask you to figure it out. Usually, they limit the options that you have available to you in some way so that you're more likely to come up with the answer they desire.

- "combine multiple known rules together in a solution"

- {{embed: ((BqiAP3yeE))}}

- So you learn the rules of the game through problem solving, which allows you to more effectively deal with future problems.

- "teach you a new rule"

- ## [[[[Past performance]]- Players may want to perform better than their past self]]

- Once they've taught you the basic concepts, they start to put them together. Now you have to solve a problem using all of the building block rules you've learned at once!

- For more detail on how the learning curve generally works, [see here](((ny2Gp7uTy)))

- "People often want to do better than they've done before. They may want to hit a personal record while working out, do better than they did on a previous exam, or weigh less than they did last week. The player is in competition with themselves."

- Then they start to move on to more and more complex puzzles that require a deeper and deeper understanding of the rules of the game.

- Closely related are __problem solving games__, which similarly accrete user understanding over time, but in these games the problems are more broadly defined and the space of possible solutions is broad.

- {{embed: ((D3cua9w25))}}

- Often, you'll come upon a solution to a puzzle where you're like, "Oh, I didn't know I could do that!" Then the game starts giving you puzzles where that general concept is the solution, so you start to recognize the situations where you can use it.

- Of __templates, instruction manuals, puzzle games, and problem solving games,__ these produce the deepest level of understanding of the core mechanics, but are also the most challenging.

- ## [[[[social comparison]]- People may want to perform better than their peers]]

- This all comes with a powerful and thrilling [[sense of discovery]] and accomplishment when you figure out things that made you struggle a bit.

- {{embed: ((LO2o0Gmpg))}}

- An [[example]] of the puzzle game learning curve being cumulative:

- There was a really interesting case where I was playing a puzzle game called Pode with my girlfriend, and it gave us a puzzle where one of our characters seemed to need to stand on a pedal in order to open up the door to the next puzzle. However, only one of our characters was heavy enough to stand on it, and we both needed to make it through to the other side, so the solution couldn't be "just stand on it." Eventually we found a weighted block to put on top of the square pedal to weigh it down so we could both move on.

- What was interesting was that in the next puzzle, they had that same pedal, and she immediately said, "oh, that means we've got to find a cube to put on the pedal."

- "Incomplete page"

- Video: {{youtube: https://www.youtube.com/watch?v=LJZBGJOzhUY}}

- # Flashcards in a Roguelike structure

- # How to use queries and what they let you do

- Flashcards are generally pretty boring on their own

- What are your tools:

- To many, they feel like a chore that must be completed. "I said that I would review this many flash cards per day so I guess I have to now." [[Apps focus on habits, which encourage minimum behavior rather than as much as possible]]

- {and: [[page 1]] [[page 2]]}

- This is a common problem with gamification. By telling people that there's a minimum amount to do each day, people feel obligated. Games generally attempt to make the player want to play as much as possible

- {or: [[page 1]] [[page 2]]}

- "Every time you die, you start over at the beginning of the game." "During each life, you engage in a "run," where you attempt to make it further than you made it last time."

- {not: [[page 1]]}

- Imagine being shown a deck of flash cards. You want to make it through as many as you can before you fail.

- You can embed these operations within each other.

- {{[[TODO]]}} How might we define failure here?

- 3(15/(2+3))=9

- Perhaps if you miss three cards in a row, then you have to start over your run.
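- That proposed rule can be sketched as a run loop: flip through a shuffled deck, and the run ends on the third miss in a row, with "how far you made it" as the score to beat next time. The deck contents, recall results, and three-miss threshold are all illustrative assumptions.

```python
# Sketch of the proposed roguelike flashcard run: the run ends after three
# consecutive misses, and the score is how many cards you were shown.

def run(cards: list[str], recalled: dict[str, bool]) -> int:
    misses_in_a_row = 0
    for shown, card in enumerate(cards, start=1):
        if recalled[card]:            # answered correctly
            misses_in_a_row = 0
        else:
            misses_in_a_row += 1
            if misses_in_a_row == 3:  # failure state: the run is over
                return shown
    return len(cards)                 # cleared the whole deck

deck = ["a", "b", "c", "d", "e", "f"]
recall = {"a": True, "b": False, "c": False, "d": False, "e": True, "f": True}
assert run(deck, recall) == 4  # the run ends on the third consecutive miss
```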

- {and: [[page 1]] [[page 2]] {not: [[page 3]]}}
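- Under the hood, nested query operators behave like set operations over the pages each block references. A minimal sketch with invented block data (this is an analogy for the semantics, not Roam's actual implementation):

```python
# Each block references a set of pages; and/or/not compose as set algebra.
blocks = {
    1: {"page 1", "page 2"},
    2: {"page 1"},
    3: {"page 2", "page 3"},
    4: {"page 1", "page 2", "page 3"},
}

def having(page: str) -> set[int]:
    # All blocks that reference the given page.
    return {b for b, refs in blocks.items() if page in refs}

# {and: [[page 1]] [[page 2]] {not: [[page 3]]}}
result = (having("page 1") & having("page 2")) - having("page 3")
assert result == {1}
```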

- "This makes it so people are able to experience growing mastery."

- What do they let you do?

- "^^[[[[progress bar]]s can visually enable [[progress monitoring]]]]. The "progress bar" is how far you have made it into a run before you hit your [[failure state]].^^" The player will be able to tell how far they have made it by how many flashcards they have been shown before their [[failure state]]

- Ask your database: What have I written about this?

- "[[[[Past performance]]- Players may want to perform better than their past self]]"

- Pull in insights from many areas

- How do we make it so "Restarting after death is easy"?

- Let's say I want to prepare for a talk about gamification. I want to pull in my insights on gamification related to [[difficulty matching]], [[failure states]]s and [[user goal]]s, but those thoughts don’t need to be combined into a block. So I’ll write a query for that:


- "In Dead Cells, when you die, it immediately starts you over at the beginning without asking you if you'd like to try again."


- ```clojure
{{[[query]]: {and: {or: [[difficulty matching]] [[failure state]] [[user goal]]} {not: [[query]]}}}}```

- Maybe, after failing, we immediately start the user over at the beginning, and we show their previous score in a corner of the screen. "The user has the exact same choice as before- they can start the next run or they can quit. However, instead of asking the player to deliberate between those two options, it makes the default option to start again. The player has already started, so quitting is turned into an effortful decision. [[choice architecture]]"


- {{[[query]]: {and: {or: [[difficulty matching]] [[failure state]] [[user goal]]} {not: [[query]]}}}}


- An alternate option would be to show the player their stats first and then ask if they'd like to continue. If we do this, then the user will have to make an active choice about whether to start reviewing a new shuffled deck of cards.


- Let’s say that I have two words to describe something. [[Perceptual Control Theory]] is the same as [[pct]]. On the page for Perceptual Control Theory, I could put a query for all notes that mention either one or the other.


- How do we give the player a boost in their next life?[++](((bPiJLjHUX)))


- ```{{[[query]]: {and: {or: [[perceptual control theory]] [[pct]]}{not: [[query]]}}}}```


- Give the user an extra life if they made it further than they made it in a previous life.
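One way to implement that rule. A minimal sketch, with `lives_after_run` as a hypothetical helper name:

```python
def lives_after_run(run_length, personal_best, lives=1):
    """Grant an extra life when a run beats the player's personal best.

    Returns the lives to start the next run with and the updated best."""
    if run_length > personal_best:
        lives += 1  # reward outperforming your past self
    return lives, max(personal_best, run_length)
```

Because the bonus only triggers on a new personal best, the player is nudged toward outperforming their past self rather than coasting.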


- {{[[query]]: {and: {or: [[perceptual control theory]] [[pct]]}{not: [[query]]}}}}


- ""[[example]] Ziggurat is a [[roguelike]] game.""


- Or let’s say you want to do the above, but also filter in my mentions of a specific person, [[Warren Mansell]].


- ""During each run, the player attempts to make it further than they did before. The further they make it, the better power ups they earn. However, the player isn't able to use the power ups until their next run, which means that they need to fail first and they may actually look forward to failure.""


- ```clojure
{{[[query]]: {and: {or: [[pct]] [[perceptual control theory]]} [[Warren Mansell]] {not: [[query]]}}}}```

- We could give the student powerups like seeing the first word of the back of a card (a **hint**), gaining an extra life, etc.


- ""[[screenshot]] of Ziggurat ![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2Ff91VECu9ky?alt=media&token=77778221-7291-46a0-b85f-511102c3593d)""


- {{[[query]]: {and: {or: [[pct]] [[perceptual control theory]]} [[Warren Mansell]] {not: [[query]]}}}}


- "Each run is similar but with randomized elements so you have a unique experience each time."


- When you want a [[saved view]]. When you filter the linked references for something, that view is gone the next time you change the filter. Here I want to replicate this set of linked references to [[pct]]


- "Whereas games that are the same each time may lead to memorization of a sequence of actions, the random elements of a roguelike lead the player to mastery over the mechanics."


- ```{{[[query]]: {and: [[pct]] [[Warren Mansell]] {not: [[query]]}}}}```


- A worry with normal flashcard decks is that if you receive the same questions in the same order every time, you may just memorize the order of elements, or that you may just memorize the structural elements of a card rather than the actual meaning of it.


- {{[[query]]: {and: [[pct]] [[Warren Mansell]] {not: [[query]]}}}}


- Giving the player a random order of cards every time would provide some variation.


- Group A is ([[reward prediction error]] OR [[rpe]] OR [[Wolfram Schultz]]), Group B is (pct OR perceptual control theory OR Warren Mansell). I want to see the times that I’ve mentioned any of the terms from Group A in conjunction with any of the terms from Group B.


- Giving the player slightly different wordings for the same flash cards may remove the ability to simply master structural elements of it.
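Both sources of variation — shuffled card order and alternate wordings — can be combined when dealing a run's deck. A sketch, assuming each card is stored as an answer with a list of interchangeable prompt wordings (`build_run_deck` is a hypothetical name):

```python
import random

def build_run_deck(cards, rng=None):
    """Deal a run's deck with randomized card order and a randomly
    chosen wording for each prompt, so the player masters meanings
    rather than the sequence or the surface form of the cards."""
    rng = rng or random.Random()
    deck = [(rng.choice(wordings), answer)
            for answer, wordings in cards.items()]
    rng.shuffle(deck)
    return deck
```

Each run then presents the same material in a slightly different form, like the randomized elements of a roguelike.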


- ```{{[[query]]: {and: {or: [[pct]] [[perceptual control theory]][[Warren Mansell]]}{or: [[Reward Prediction Error]][[RPE]][[Wolfram Schultz]]}{not: [[query]]}}}}```


- "Each run is similar but with randomized elements so you have a unique experience each time." This makes flashcards less boring, because you're doing it in a different order each time.


- {{[[query]]: {and: {or: [[pct]] [[perceptual control theory]][[Warren Mansell]]}{or: [[Reward Prediction Error]][[RPE]][[Wolfram Schultz]]}{not: [[query]]}}}}


- [[Games intentionally design for [[failure state]] recovery]]


- Show yourself what has been processed and what hasn’t yet been processed.


- [Using queries to mark your progress processing]([[Creating a content pipeline and tracking your ideas in [[Roam]] with [[queries]]]])


- In the [[Opus Magnum]] [[example]] below, you'll notice something important- by showing the user the histogram comparing their performance to every other player, that helps the user to set goals for themselves. **Players don't know what good and bad performance is before seeing the distribution of every other player.**


- When there is strong support coming from the community, the question shifts from "what am I capable of?" to "What am I capable of with the help of the community?"


- [[features rarely make sense to new users without the context of their goals]]. However, there are [[[[individual difference]]s between people in prior [[skill level]]]]. As [[the [[skill level]] of each user grows over time]], so does their vocabulary. With the new vocabulary given by the app, they are able to conceptualize and express desires that they couldn't express before. Because of this, user goals change over time.


- "[[example]]: [[Opus Magnum]] is a [[problem solving game]] where each problem you get has a million different solutions. After completing a problem, users are compared to the rest of the world based on three histograms, which represent three different scores placed into the context of [[social comparison]]. Further left means doing better than more people. The player has the opportunity to redo the problem and attempt to find a new solution if they are dissatisfied with any of their scores."


- I have not yet looked into whether this is an existing concept in behavioral science, but it's a combination of a few common ideas:


- User goals also change over time because life circumstances change.


- ^^This social goal is incredibly salient, as demonstrated by:^^ "Aside- I redid the first level for creating water for an hour after I had already beaten it because I wanted to get cycles down."


- [[self-efficacy]] relates to a person's sense of self-identity


- This means that behaviors at the beginning of the user journey are more indicative of the traits of users who were already likely to be successful.


- "[[screenshot]] of Opus Magnum Histograms ![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2FWAoP-VN4lC?alt=media&token=07ace2c3-9465-4023-be0b-1f6e6490bc55)"


- Self-identity is influenced by others (most things are)


- ^^I claim that a necessary requirement of [[retention]] is that the app is able to adapt to changing user goals over time.^^


- Others can provide instruction and support


- This means that we need to track how behaviors over time relate to retention, not just at the beginning.


- Depending on the complexity of the app and different use cases, there may be different points in time when the user reaches [[user involvement]] "maturity"


- [[onboarding]]


- ![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2FdKATvcslLD.jpeg?alt=media&token=d8cf4e73-22ef-44ff-be09-cbe2213dc69a)


- [[[[continuous onboarding]] can come from [[community]]]]


- I'm [[Rob Haisfield]], CEO of [[behavior design]] and [[gamification]] consultancy [[Influence Insights]] and behavioral product strategist at a [[startup studio]] called [[Spark Wave]], where I do the same thing as I do with my consulting except for portfolio companies. Generally speaking, this means that I read a lot of behavioral science papers, I play a lot of video games[++]([[some of my favorite video games]]), and I think about how the principles of each apply to the way that apps function. This is all with the aim of enabling users to better accomplish their goals.


- [[New users of an app should start with a project to have a successful [[onboarding]]]]


- [[user involvement]] is more important than engagement is for growth and is fairly central towards my approach to enabling product design


- [[[[puzzle game]]s for [[power user]] [[onboarding]]]]


- "User involvement leads to"


- [[failure state]]


- "[[retention]]"


- "[[virality]]"


- "[[adoption]]"


- I got into this world out of a deep passion and curiosity for learning about why people do what they do and a drive to make an impact. Behavior design is a new field, has incredible potential, and still has much to discover. Gamification has been around for a while, but it has tremendous room for improvement because it's been misapplied and copied in its misapplication since its inception.


- [[Why I chose to consult rather than get a PhD]]


- [[Most [[gamification]] is pretty bad]]


- When making great gamification, it's important to consider


- [[[[Expectancy Value Theory]] and its role in [[gamification]]]]


- [[failure state]]s


- [[difficulty matching]]


- [[There could be many genres of [[gamification]]]]


- My general mindset is that there is no one theory that explains everything. The richness of behavioral science and game design is that both have explored a huge amount of different subjects and that the relationships between those subjects are deep. Habits aren't everything. Heuristics and biases aren't everything. Points, badges, and leaderboards aren't everything. I [[follow curiosity unconditionally]], and trust that if I keep track of what I learn well, there's a good chance it will be useful at some point.


- [[There is no curriculum of everything you should know]]


- I hope to combine my expertise in behavioral science and gamification to help users improve their lives through products. Your users hire you to help them achieve some goal, but loving your product is a result of your teamwork with the user. [[The role of the user and the role of the app]] means both players need to play.


- # [[Contact Me]]


- Email: rob@influenceinsights.io


- Twitter: [@RobertHaisfield](https://twitter.com/RobertHaisfield)


- LinkedIn: https://www.linkedin.com/in/robhaisfield/


- My Website: https://www.influenceinsights.io/


- ## Some nice things people have said about me:


- [[Adam Taylor]]


- Cofounder and COO of Fabriq


- "These days, it's all too easy to think you’re implementing effective behavioral design principals after reading a few articles, or a one-off conversation with someone who’s "done it before”, but Rob provides the kind of expertise that product design teams are unlikely to possess in-house, even if they are well schooled in UI/UX/CX. Rob’s contribution to our "goal & reward” system redesign was not only a crucial correction to our process and the direction we took the work, but it was an educational opportunity for our team(s) that gave us additional lenses through which to view our work that will pay dividends down the line.”


- [[Spencer Greenberg]]


- Founder of [[Spark Wave]]


- Mathematician, Behavioral Scientist, Entrepreneur


- "Rob has remarkable creativity, and an impressive fluency with applying behavior science principles in a practical way to our products. He has demonstrated his ability to solve tricky user behavior challenges time and again.”


- [[Eddie Liu]]


- Founder of UpLift


- "Rob has brought a valuable behavioral science perspective to our company. He dives deep into the issues at hand and has been crucial in helping us change user behavior in a logical and effective manner."


- [[Patrick Olsen]]


- Optimization consultant specializing in strategic optimization, driving teams to increase conversions and hands on CRO


- "Rob is an amazing behavioral scientist. He simply saw things about our product that we didn't. His communication skills were excellent, and you could truly feel his passion for delivering the best results possible. No doubt I will hire Rob again for future projects. I give him my highest recommendations."


- {{[[TODO]]}} [[stub]]


- {{[[TODO]]}}


- {{[[TODO]]}} [[stub]]


- [[[[Kurt Lewin]]'s Equation]] states that "Behavior is a function of the interplay between [[person-side factor]]s and the contextual factors of their environment".


- Duolingo asks you to maintain a streak. [[streak counter]]


- [[aha moment]]s don't just "happen" to the users. The user has some agency in making those moments happen


- The designer has partial to complete control over the information that is presented to the user, how that information is framed, what choices are given to users, and what information people are paying attention to. As long as the user is paying attention to the app, the designer exerts influence over the user's behavior.


- "It's motivating while you have it, but when you lose it, people may be demotivated and fail to start again. All they had worked for was for nothing, and it may take them weeks to earn it back, making starting over feel tedious. This is a crappy [[failure state]]"


- [[If a new user starts with a project, they are more likely to experience [[aha moment]]s]]


- "["Yesterday I fell asleep before I finished the session or I left the app open or something, and this morning the counter had zipped back to 1. I was unreasonably upset about it. I felt like the previous 380-something days had been for nothing. I was useless.”](https://indifferentignorance.com/2016/12/06/introducing-the-whitest-white-girl-problem-ive-ever-had-ft-headspace/)"


- "Having a project to start out sets the user on the hunt for information", and that information is going to be relevant to their goals


- "Streak counters encourage a minimum amount of behavior"


- When the **user has a project,** they are more likely to **do behaviors that are relevant to their goals, making those behaviors more meaningful.** Compare that to **the alternative,** when they are **using the app on an abstract level,** attempting to understand it only through the features, **then they will do meaningless behaviors.**


- You're only doing 10 minutes per day- that's all it's asking you to do. This makes it feel as though it might be a chore.


- [[example]] On Twitter, you need to follow interesting people in order to experience the benefit of seeing something interesting


- Games don't ask the player to play some minimum amount per day. Instead, they make a fun game that people want to play as much as possible.


- Apps should instead be trying to encourage the user to do as much as possible because they enjoy what they're doing


- So how would Duolingo encourage users to go through as many lessons as possible?


- Could Duolingo set up an incentive system, as an alternative to the streak counter, that instead asks: "Do this much or more?"


- In a [[roguelike]], "During each life, you engage in a "run," where you attempt to make it further than you made it last time."


- "This is important because it creates a [[learning goal]]"


- [[[[Past performance]]- Players may want to perform better than their past self]]


- [[example]] "A [[personal leaderboard]] is one way of helping the user to outperform their past self."


- "[[example]]: in Lumosity, players are able to see how they performed in each game as compared to their previous best scores."


- "![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2FoyivjuiNcg?alt=media&token=151911c2-0016-4d00-a82e-f6a8d09441ad)"


- "[[example]]: in Loop Habit Tracker, they show the user a personal leaderboard of their previous streaks. By having a personal leaderboard, the users previous efforts do not feel as though they are wasted. This is a clever way of addressing the issue that "Streak counters are only really motivating while you have a streak going. In the early stages of a streak, it is not valuable so people don't mind losing them. Upon losing a long streak, [[loss aversion]] leads to a significant feeling of distress. This can demoralize the user, especially if it will take weeks, months, or longer to recover the streak.[++](((g9lr8L82X)))""


- "The personal leaderboard also shifts the goal of the user- instead of simply trying to avoid losing their streak, the user is also trying to outperform their past self. With the [[failure state]] of losing their streak, they then may set the goal to do better than last time."
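The personal-leaderboard mechanic is easy to sketch. A hypothetical implementation, assuming we only keep the top few streaks (`record_streak` is an invented name):

```python
def record_streak(leaderboard, finished_streak, keep=5):
    """Add a just-ended streak to the personal leaderboard.

    Past effort is preserved as a score to beat instead of being
    wiped out; the returned flag signals a new personal best."""
    updated = sorted(leaderboard + [finished_streak], reverse=True)[:keep]
    new_best = finished_streak >= updated[0]
    return updated, new_best
```

On a broken streak, the app can show this list instead of a bare "day 1," reframing the failure state as "beat your best of N days."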


- ^^This is a meta approach at improving the failure state from habit trackers, which still encourage a minimum amount of behavior.^^


- "[[screenshot]] of Loop Habit Tracker![](https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2FRob-Haisfield-Thinking-in-Public%2FMk5IE6iC0w?alt=media&token=4ef85193-7c76-49f0-828a-28617f1a98a7)"


- Most apps (intentionally or unintentionally) ignore the user's failure state in their design.


- If an app enables the user to accomplish only one goal, then the user is lost as soon as the one goal disappears or is accomplished better by using some other product.


- This is often because the designers aren't even paying attention to what a failure state looks like.


- By nature, a goal represents a discrepancy between a person's present state and their desired reality. Failure is a natural part of the process. If it wasn't, then the [[intention-behavior gap]] wouldn't be an issue.


- A fundamental reason why people fail at goal achievement and at maintaining long term behavior change is because they fail to recover and retry.


- Apps should recognize the points where their users are likely to fail and design them to reduce the likelihood of failure, increase the likelihood that they retry, or make failure still beneficial in some way.


- This is currently a [[stub]], but you can check my page on failure states for more details.


- [[Behavioral Science of Great Gamification]]




- My responses have #c and are indented one level deeper than the base text. If you want to play around with the functionality of Roam in a fun way and see just the threads that have my comments, you can use **page filtration** by clicking on the button just to the right of the search bar and then clicking on `c`



- Remember, any time you see ++, that means I've used a markdown alias on a block or page reference and you can hover your mouse over it for a second and see the text I'm linking to. For more tips, see [[how to navigate [[Roam]]]]


- #c **General thoughts on the article:**


- I’d agree with the claim that the product has to be useful, but I’d say that’s the bare minimum requirement rather than being the sufficient requirement of the product.


- I also agree with the sentiment that gamification shouldn't be used to sugarcoat bad design.[++]([[The user should believe that the app will help them achieve a goal that they actually have]])


- I would also claim that gamification/[[behavior design]] ("I sort of just mix behavioral science and game design to see what comes out the other end") can be part of what makes the app useful by helping the user to overcome the [[intention-behavior gap]].


- I disagree with her claim that it is good for x situation or bad for y situation, because [[[[gamification]] is not one monolithic thing]].


- Articles like this make me feel like I’m some sort of "gamification apologist” but maybe the problem is that the way that I conceptualize gamification is fundamentally different from the way that gamification is generally conceptualized[++](((alHJo1-L8))) so I’m generally just not talking about the same thing as the people who write these articles. She is arguing against what I refer to as [[lazy [[gamification]]]]. I wish I could just make up a new word and run with that without having to re-educate an entire market, but this is what people search for online.


- **Full text, comments interwoven and indented:**


- # Can we forget about gamification once and for all?


- ![Screen from an old, 8-bit game saying "Game over” and asking user to play again](https://miro.medium.com/max/1400/1*5-sWB0xdoYlH-dyyQqJ_2A.jpeg)


- If I got a dollar every time I hear about gamification in a client meeting, I would be rich. Very rich. And the funny thing is, I don’t really work on consumer apps. I usually work with big (or potentially big) systems targeted at corporate clients. No matter the target, 80% of initial meetings or plannings include the word that makes me gag: **the holy "gamification.”**


- My usual approach was not to prove that this is a wrong approach, but rather to quietly deliver user flows, wireframes, and designs in a way that I thought was alright. To provide the biggest value for the user and satisfy the client. But I am tired.


- And don’t get me wrong. Gamification can work. There are many great uses, education and healthcare being the most prominent. I am here to critique the notion of gamification for the sake of gamification. I am here to critique this weird notion that apps can’t be engaging if there is no gamification. I want to critique the fact that we let clients talk about gamification and comply with their wishes instead of educating them that apps have to be useful to be engaging, and no bells and whistles will change that if the user doesn’t need the product.


- # **What exactly is gamification?**


- Let’s get back to academics because it is my firm belief that we read too many blogs and success stories and not enough research. Academically, two texts defined gamification:


- Deterding et al. (2011) say that [[gamification]] is the use of game design elements in nongame-contexts. #definition


- Huotari & Hamari (2012) define [[gamification]] as "__a process of enhancing a service with affordances for gameful experiences to support user’s overall value creation.”__ #definition


- The latest definition by Landers et al. (2018) makes it roll off the tongue more easily by stating that gamification is using game-like elements to make nongame tasks more interesting.


- Let’s pause for a second and think about Huotari and Hamari, though: **enhancing a service to support overall value creation**.


- As Landers (2018) wrote in his paper (aptly titled Gamification Misunderstood: How Badly Executed and Rhetorical Gamification Obscures Its Transformative Potential), there are two types of gamification: **legitimate** and **rhetorical**. Legitimate gamification has a transformative potential when used skilfully. Rhetorical gamification __"is at best novice gameful design and at worst a swindle, an attempt to make something appear "game-like” purely to sell more gamification.__”


- Landers points to one more important thing: rhetorical, fake gamification comes from a misunderstanding of the term. Sometimes it is just another reincarnation of organizational games, strategy games, serious play, etc.; sometimes it’s an all-encompassing term for engagement.


- Surprisingly, given the hype that it received in previous years, gamification didn’t amass significant research outside of education and healthcare. I went through two big meta-analyses of existing research so that you don’t have to, and let me tell you: **findings on the effectiveness of gamification are inconclusive at best.**


- #c "Questions like "is gamification effective?" miss the point."


- "When I hear people say "gamification does or doesn't work," I have the same reaction that many people would have if someone says "design doesn't work." Gamification isn't one thing as I define it, but rather an interplay between game design, human computer interaction, behavioral science, and [[behavior design]]."


- "Games aren't asking themselves whether they should use a leaderboard or not. They are [asking themselves how people should/could be playing the game](((vq7ICOkbp))), and how mechanics fit into an overall system meant to deliver an experience is far more interesting."


- "Games aren't made up of the additive effect of a bunch of individual mechanics working independently from each other. Instead, they function as a system of interacting parts where the player expresses agency in how they play."


- # **Gamification as a mental shortcut**


- Oftentimes clients come to us, and they know that people use only a few apps on their phones. They say that their new product has to be engaging, so they want an element of gamification. For that, I blame Medium articles, Harvard Business Review, content marketers, and the likes. I blame consultancies for doing a disservice not only to pure, well-executed gamification but also to clients. Clients are allowed to make this shortcut, but it is our duty to educate them. **This is why they hire us.**


- Talking about the gamification **blurs the vision of our clients**. I don’t blame them, they want their apps to be used. But it is not the way! It blurs the vision in our discussions because once gamification is mentioned, discussion shifts to finding ways to make sure that users get back. It shifts to famous "hooks” taken straight from Nir Eyal (I hate this book).


- To quote Bogost from his text ["Why Gamification is Bullshit”](http://bogost.com/writing/blog/gamification_is_bullshit/):


- __"Gamification is reassuring. It gives Vice Presidents and Brand Managers comfort: they’re doing everything right, and they can do even better by adding "a games strategy” to their existing products, slathering on "gaminess” like aioli on ciabatta at the consultant’s indulgent sales lunch. Gamification is easy. It offers simple, repeatable approaches in which benefit, honor, and aesthetics are less important than facility. For the consultants and the startups, that means selling the same bullshit in book, workshop, platform, or API form over and over again, at limited incremental cost.”__


- The thing is, if gamification worked, we would see much more of it.


- Your financial app doesn’t have to be gamified. It has to be useful. Your application for assessing the risk of delayed payments doesn’t have to be gamified. No person who is not an accountant needs to open your accounting app every day. **It is enough that it will be useful.**


- #c Sort of true, but I disagree with the claim that "it is enough that it will be useful” because otherwise the [[intention-behavior gap]] wouldn’t be a problem. I would rephrase that claim as **"Being useful is the bare minimum requirement of a good product.”** That could be a whole blog post in itself, but I think that bridging the intention-behavior gap from there is part of the role of the [[behavior designer]]


- And what clients don’t realize (and they don’t have to, it is our role) is that gamification comes from a time when it was OK to get users addicted to apps, when we wanted to maximize the time spent in consumer apps without looking at the consequences. We tried to hook users at all costs and keep them coming back.


- #c Reducing friction[++](((aIyhy7DSz))) is a perfectly valid way of approaching the task of bridging the intention-behavior gap. The problem the author is citing goes back to the idea of just focusing on the wrong end-goal for the behavior change. "Gamification” is just one way to approach that end goal.


- I think that it is now clear how exploitative that is. It makes us focus on the wrong things. **It makes us focus on bells and whistles instead of improving flows, making it easier to get a job done.**


- The closest those apps will get to actual gamification is if they create a leaderboard of their employees to push them to work more, for longer and under more pressure. Do you really want to build that, though?


- And what if we ask a corporation for, I don’t know, a loyalty program to create something resembling gamification? They don’t have one. Creating one or modifying existing programs would take too much time and be too complicated. The responsibility for making apps gamified then falls on designers and consultants. We have to figure out how to make people collect points that don’t make sense. We apparently have to give them badges for the most rudimentary tasks. **Why would I give an adult a badge just for logging into your app?** It’s patronizing.


- # **Gamification is a smokescreen**


- Innovation theatre, that’s what it is. We (us, clients, everyone in tech really) feel this pressure to be innovative, to show that innovation happens. Everything is called innovation, even if it’s pure window dressing on our part. Companies make a simple app or digitize a legacy part of the business and call it innovation even though it’s an industry standard.


- It has to be engaging, so it has to be gamified. Clients don’t really care about usefulness. The fact that they themselves don’t use apps they don’t need somehow doesn’t register.


- Whether our products tap into an existing need or create one is a different question. Still, it is not done with gamification. It is not done by addicting users to apps that are supposed to help them work or manage their documents or whatever basic task they have to do.


- __**We end up with a funny way to solve a problem that is not worth solving, set wrong KPIs to measure success, and wonder why it failed.**__


- #c I don’t think the problem is with gamification, it’s with focusing on problems not worth solving.


- I know that we have to sell our services and that talking clients out of an idea is tricky. **But we have a moral obligation to right the wrongs and to deconstruct the hype**.


- Let’s educate.


- If it’s not possible to change minds, do what you do: design without gamification, or add a progress bar somewhere and call it a day. But please, let’s have those discussions. Let’s educate people about design, engagement, and the moral obligations we have towards users and wider humanity, because **design happens without designers**. Design happens everywhere, any time someone makes product decisions. The people who you think don’t understand design, whose minds you can’t stomach the idea of changing, **will make decisions with or without you**. Let’s help them make better decisions.


- I will try to.


- [[Author]]: [[Josh Elman]]


- [[retention]] [[metric]] [[user involvement]]


- source:: https://mixpanel.com/innovators/the-only-stickiness-metric-that-matters/?utm_medium=email&utm_source=product-update&utm_campaign=innovators_article&utm_content=Innovators&utm_term=


- # [[reading question]] What makes great products stick?


- #c I would say:


- There is a close relationship between the app and the user.


- See [[The role of the user and the role of the app]], especially -mLEeUPQp


- People deeply care about the app because it's been incorporated into their lives on a deeper level. High [[goal resonance]], personal ownership,[++]([[loss aversion]])[**]([[endowment effect]]) [[[[identity]] resonance]], and expectations are exceeded


- "definition:: User involvement happens when the person is using your service to do the behaviors that make it awesome when their context calls for it. We're essentially influencing user behavior so people do what makes both them and your business happy.​"


- #quote [[Josh Elman]] "Deeply understanding the individual use cases of everyone who touches the product, and what success means for them"


- The two use cases he points to for LinkedIn - they wanted users in either category to be successful


- Find


- Be Found


- This approach is similar to what we've done with [[Fabriq]]. I'd say it's still uncertain whether we've identified the right goals, but probable.


- #quote [[Josh Elman]] they build loyalty at the outset


- He's saying the same thing that I'm saying, which is that "[[onboarding]] is an opportunity to gain [[user involvement]] and information early."


- He said: "You have their attention - don't waste it." This relates to what I've said: "[[[[upfront onboarding]] is a shared experience across users]], whereas anything that happens after is not universal."


- #quote Josh Elman "Your goal should be to maximize the predictability that users will want to re-engage with your product"


- Re - [[continuous onboarding]], he says "Onboarding goes beyond the first setup screen, or even the first session—^^it’s an ongoing process that continues until someone is using the product effectively on their own."^^


- The main area that's interesting to me about that is where he says that it goes until people are using the product effectively on their own.


- ## [[reading question]] How do we define effective use? This might be where my tension with the author is


- So I think there's a distinction to be made between somebody who's using the product effectively for their current use case and someone who's using the product in a way that fills in their potential use cases. I think that a continuous onboarding should be progressively pushing the user towards greater use cases for themselves, eventually reaching towards [[user involvement]]


- This needs to be balanced against other priorities. In many cases, it may be enough to simply have effective use for the use cases that users currently imagine, especially for apps where the core use cases are essentially prepackaged for the user (think about the difference between [[Substack]] and building your own blog). Once you start working with more flexible [[building block]] apps, you need to go beyond that. Especially if it's a [[horizontal product]] and basic (but successful) usage is vulnerable to competition copying the basic [[data structure]].[++](((mSd9rPhLr)))


- #quote [[Josh Elman]] they give users a reason to come back.


- ## [[reading question]] How might we give people a reason to come back to the app?


- Before I read the rest of his claim, I just want to unpack how I perceive that


- Why would we expect users to open up an app if they have no reason to do so?


- How might we give users a reason to want notifications, or tap into the reasons they already might have?


- A lot of apps think about "how can we get people to do this thing" but that's already fighting an uphill battle. "It is easier to facilitate people doing something that they want to do than it is to convince them to do something they don't want to do".


- He says to look for the opportune moments to send people notifications


- Time - Is there a moment in time that's relevant to the app? For [[example]], the sudden deprivation of face-to-face interactions given [[COVID-19]] makes [[Fabriq]] more relevant to people's lives, giving an opportunity for crafted messaging.


- Within the product - Bring the user back to some activity in the app


- This was mainly what I was talking about in my unpacking above.


- # [[reading question]] What metrics matter for retention?


- My thoughts before reading this section:


- [[The role of the user and the role of the app]]


- Are users doing the behaviors that lead them to be successful?


- Is the company transforming the behaviors that users do as effectively as possible into value for the user?


- He says:


- Don't get lost in [[vanity metric]]s


- #quote Josh Elman "But at the end of the day, your job is to focus on what these numbers tell you about users who are most likely to stick."


- I love this quote. It's not about the numbers. It's about the meaning that we derive from the numbers, in concert with any other sorts of evidence that we are collecting.


- With LinkedIn and Twitter, he knew that people needed to set up their timelines well in order to be successful users. That meant that a key behavior is following and connecting with the right people.


- I wonder if there should be some added #friction to adding connections on LinkedIn. If people have too many connections, they will likely end up with a crappy, diluted timeline.


- I wonder if LinkedIn and Twitter should also be reducing #friction and adding #fuel to increase the likelihood that people actually engage in timeline curation.


- This article talked only about behaviors in the [[onboarding]].


- It seems as though a simplistic view is being taken - users who do these behaviors during the onboarding are most likely to stick around long term. He may not actually believe this, but this is the impression I'm getting from what's conveyed in the article.


- However, "[[[[user goal]]s change over time]], so the app must continue to satisfy changing goals to retain users"


- "This means that the behaviors at the beginning of the user journey are more likely indicative of traits of users who are already likely to be successful."


- "^^I claim that a necessary requirement of [[retention]] is that the app is able to adapt to changing user goals over time.^^"


- "This means that we need to track how behaviors over time relate to retention, not just at the beginning. "


- "Depending on the complexity of the app and different use cases, there may be different points in time when the user reaches [[user involvement]] "maturity""


- There may be a time component


- #quote [[Josh Elman]] "How many times did users perform a core action on the expected cycle?”


- I think a question that needs to be asked is "Does frequency matter?" In some cases it does. Often it doesn’t, and it leads to a [[vanity metric]]. Just be conscious of whether frequency is actually important to user success.
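- Elman’s question ("How many times did users perform a core action on the expected cycle?") can be made concrete. A minimal sketch in Python; the cycle length, dates, and event data here are all hypothetical, not from the article:

```python
from datetime import datetime, timedelta

def cycles_with_core_action(event_times, start, end, cycle=timedelta(days=7)):
    """Count the expected cycles between start and end in which the user
    performed the core action at least once. Returns (hit_cycles, total_cycles)."""
    hits, total = 0, 0
    window_start = start
    while window_start < end:
        window_end = window_start + cycle
        total += 1
        if any(window_start <= t < window_end for t in event_times):
            hits += 1
        window_start = window_end
    return hits, total

# Hypothetical user who performed the core action in 3 of 4 weekly cycles.
events = [datetime(2023, 1, 2), datetime(2023, 1, 9), datetime(2023, 1, 25)]
hits, total = cycles_with_core_action(events, datetime(2023, 1, 1), datetime(2023, 1, 29))
```

  The ratio `hits / total` only matters if hitting the cycle actually predicts user success; otherwise it is exactly the kind of vanity metric warned about above.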


- #quote [[Josh Elman]] "Data, while essential, can cause us to think in averages rather than individuals. But every data point represents a person, and tells a story about how they’re interacting with your product"


- Ugh, I love this.


- Goes back to "I love this quote. It's not about the numbers. It's about the meaning that we derive from the numbers, in concert with any other sorts of evidence that we are collecting."


- Speaks to the value of tracking [[user behavior]] and [[user goal]]s, because that allows us the ability to more effectively tell stories


- The point about averages rather than individuals is also incredibly salient.


- In behavioral science research, it's been the trend to look at averages rather than individuals, but that leads to overgeneralizations where we lose meaning by handwaving away [[individual difference]] through random selection of a "general" sample.


- For [[example]], there is not a single bias or heuristic that affects 100% of the people tested; some people always come up rational. This gets at the idea that individual differences are important, because a marketing message that relies on a given heuristic likely won’t work on everyone.


- I worry that this sometimes happens with products. Let's imagine that LinkedIn decides "If we can get people to add 100 connections during their first two weeks, then they're hooked. We know this because **on average** those are the users who have the highest retention."


- But what if people who add 100 connections are **simultaneously** the group with the highest dropoff rate because for some people, that was just them being overambitious and they end up with a meaningless timeline? We need to do [[Bayesian Reasoning]] and also dig a bit deeper into the [[individual difference]]s beyond those averages.
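- This failure mode is easy to demonstrate with toy numbers. A sketch with entirely made-up cohort data, showing how the segment that looks best on average can simultaneously contain the subgroup with the worst dropoff:

```python
# Hypothetical LinkedIn-style cohort: (week-1 connections, curated their
# timeline?, retained at day 90?). All values are invented for illustration.
users = [
    (120, True, True), (150, True, True), (110, True, True),
    (130, False, False), (140, False, False), (105, False, False),
    (20, True, True), (30, False, False), (25, True, False),
]

def retention(group):
    """Fraction of a group retained at day 90."""
    return sum(u[2] for u in group) / len(group)

high = [u for u in users if u[0] >= 100]  # added 100+ connections
low = [u for u in users if u[0] < 100]

# On average, the 100+ connections group looks like the winner...
avg_high, avg_low = retention(high), retention(low)

# ...but segmenting by timeline curation shows the churn is concentrated
# in the overambitious subgroup with diluted timelines.
curated = [u for u in high if u[1]]
uncurated = [u for u in high if not u[1]]
```

  The same "get 100 connections" KPI looks great on the average and disastrous for the uncurated subgroup, which is the point about digging beneath averages.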



- A big idea that comes related to this article: what about asking "What story do we want the user to be able to tell about the app?" That likely helps identify key behaviors in a way that organizations might find more helpful. **So for the apps that you're working on, what stories do you want to hear from users?**


- Digging in just a touch deeper: **How will you be able to track whether those stories are happening?**



- {{[[TODO]]}}


- annotations::


- Source: https://www.joelonsoftware.com/2012/01/06/how-trello-is-different/


- The biggest difference you’ll notice (compared to our previous products pitched solely at software developers) is that Trello is a totally [[horizontal product]]. Horizontal means that it can be used by people from all walks of life. Word processors and web browsers are horizontal. The software your dentist uses to torture you with drills is vertical.


- [[vertical product]] is much easier to pull off and make money with, and it’s a good choice for your first startup. Here are two key reasons: It’s easier to find customers. If you make dentist software, you know which conventions to go to and which magazines to advertise in. All you have to do is find dentists. The margins are better. Your users are professionals at work and it makes sense for them to give you money if you can solve their problems.


- Making a major [[horizontal product]] that’s useful in any walk of life is almost impossible to pull off. You can’t charge very much, because you’re competing with other [[horizontal product]]s that can amortize their development costs across a huge number of users. It’s high risk, high reward: not suitable for a young bootstrapped startup, but not a bad idea for a second or third product from a mature and stable company like Fog Creek.


- #quote The great horizontal killer applications are actually just fancy [[data structure]]s.


- #quote most people just used Excel to make lists. Suddenly we understood why Lotus Improv, which was this fancy futuristic spreadsheet that was going to make Excel obsolete, had failed completely: because it was great at calculations, but terrible at creating tables, and everyone was using Excel for tables, not calculations.


- #quote Spreadsheets are not just tools for doing "what-if” analysis. They provide a specific [[data structure]]: a table. Most Excel users never enter a formula. They use Excel when they need a table. The gridlines are the most important feature of Excel, not recalc.


- #quote Over the next two weeks we visited dozens of Excel customers, and did not see anyone using Excel to actually perform what you would call "calculations.” Almost all of them were using Excel because it was a convenient way to create a table.


- #quote Word processors are not just tools for writing books, reports, and letters. They provide a specific [[data structure]]: lines of text which automatically wrap and split into pages.


- #quote PowerPoint is not just a tool for making boring meetings. It provides a specific [[data structure]]: an array of full-screen images.


- #quote Some people saw Trello and said, "oh, it’s Kanban boards. For developing software the agile way.” Yeah, it’s that, but it’s also for planning a wedding, for making a list of potential vacation spots to share with your family, for keeping track of applicants to open job positions, and for a billion other things. In fact Trello is for anything where you want to maintain a list of lists with a group of people.
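- Spolsky’s "list of lists with a group of people" claim can be sketched directly as a data structure. A minimal illustration (the class names, board contents, and `move` operation are invented here, not Trello’s actual model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Card:
    title: str

@dataclass
class Column:
    name: str
    cards: List[Card] = field(default_factory=list)

@dataclass
class Board:
    name: str
    columns: List[Column] = field(default_factory=list)

    def move(self, title, src, dst):
        """The core operation: move a card from one list to another."""
        cols = {c.name: c for c in self.columns}
        card = next(c for c in cols[src].cards if c.title == title)
        cols[src].cards.remove(card)
        cols[dst].cards.append(card)

# The same structure serves a wedding, a hiring pipeline, or a sprint board.
wedding = Board("Wedding", [
    Column("To book", [Card("Venue"), Card("Band")]),
    Column("Booked", []),
])
wedding.move("Venue", "To book", "Booked")
```

  The horizontal-product insight is that everything above is domain-agnostic: nothing in the structure knows whether a card is a wedding vendor or a job applicant.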


- **It’s delivered continuously.** Rather than having major and minor releases, we pretty much just continuously push out new features from development to customers. A feature that you built and tested, but didn’t deliver yet because you’re waiting for the next major release, becomes __inventory__. Inventory is dead weight: money you spent that’s just wasting away without earning you anything.


- **It’s not exhaustively tested before being released**. We thought we could get away with this because Trello is free, so customers are more forgiving. But to tell the truth, the real reason we get away with it is because bugs are fixed in a matter of hours, not months, so the net number of "bugs experienced by the public” is low.


- **We work in public**. The rule on the Trello team is "default public.” We have a public Trello board that shows everything that we’re working on and where it’s up to. We use this to let customers vote and comment on their favorite features.


- #comment Public roadmap lets you set up a [[learning loop]]


- **This is a "Get Big Fast” product, not a "Ben and Jerry’s” product**. See Strategy Letter I. The business goal for Trello is to ultimately get to 100 million users. That means that our highest priority is removing any obstacles to adoption.


- **Trello is free.** The friction caused by charging for a product is the biggest impediment to massive growth. In the long run, we think it’s much easier to figure out how to extract a small amount of money out of a large number of users than to extract a large amount of money out of a small number of users.


- **The API and plug-in architectures are the highest priority.** Another way of putting that is: never build anything in-house if you can expose a basic API and get those high-value users (the ones who are getting the most value out of the platform) to build it for you. On the Trello team, any feature that can be provided by a plug-in must be provided by a plug-in.


- This page is adapted from a blog post that I made early in my career: https://uxplanet.org/whats-appening-adding-behavioral-science-to-minimalist-design-d179a67f2b75


- Every day, you have a million things that you could be doing with your free time. You could scroll through Facebook, watch Netflix, or talk to a friend; the possibilities are endless. Since the Kindle reading experience (like most reader apps) is focused on letting you read rather than encouraging you to read, the only reason a person will open Kindle is if they’ve decided in the moment to use it. To make reading come to mind and be somebody’s first choice of things to do, it needs to be made more engaging. This is a dynamic app, not a static book. We can do that.


- Enhancing and enabling intrinsic motivation is one of the best ways to create long-term behavior change.


- "According to [[self-determination theory]], our intrinsic motivation is based on three main drives for competence, autonomy, and relatedness.[[competence]] is about a need to improve our skills and knowledge. [[autonomy]] describes wanting to personally identify with the task at hand and be in control of our own actions. [[relatedness]] has to do with wanting to connect to others through our actions."


- Though every person is, of course, unique and will likely weight the importance of each of these components differently, an imperfect rule of thumb is that the more of these that you can align with the app, the more intrinsic motivation you can build for your users.


- Perhaps when you’re reading a book, you could see how many other people are reading the same book, making you feel like you’re a part of a community.


- Perhaps you could see how many people have read the book already, and are rewarded for doing obscure readings.


- If a certain part of the book is confusing or thought provoking, you could post a question or discussion topic. Once you finish a chapter, you could see the questions that other people posted and post your own responses, connecting you even further to the community. #[[[[self-efficacy]] can be augmented to be [[social [[self-efficacy]]]] with community support]]


- Maybe if people like your answer, they could give it a smiley face.


- If you get enough smiley faces, you could earn certain ranks, like "Top commenter.” This would allow you to demonstrate your [[competence]] both to yourself and others while giving and receiving support, creating a sense of belonging in your community.


- Push notifications could let you know when other book club members post questions at parts that you’ve already read, driving you to answer them to show your [[competence]], gain a stronger grasp of the material, and connect with others. You could of course be notified when someone answers your questions, which causes you to open Kindle out of curiosity.


- Additionally, you could be notified when others post questions within 50 pages of your current point, making you want to hurry up and get there so you can talk with your friend about it.
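- That trigger reduces to a simple range check. A sketch, assuming questions are stored with the page they refer to (the data shape, names, and 50-page window are illustrative):

```python
def nearby_question_alerts(reader_page, questions, window=50):
    """Return questions posted ahead of the reader but within `window`
    pages, i.e. close enough to motivate catching up.
    Questions are hypothetical (page, author, text) tuples."""
    return [q for q in questions
            if reader_page < q[0] <= reader_page + window]

questions = [(30, "Sam", "Why did she leave?"),
             (80, "Kim", "Is the narrator reliable?"),
             (200, "Lee", "What does the ending mean?")]
alerts = nearby_question_alerts(60, questions)  # reader is on page 60
```

  Only Kim’s question on page 80 qualifies: Sam’s is behind the reader and Lee’s is too far ahead to be motivating.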


- [[relatedness]] is strongest when you’re connecting to similar others, so a "Book Club” feature could let you form a group with friends and family. In a smaller group than before, you could leave questions and comments for the other book club readers to see and respond to. Since you feel a strong connection to the other members of the group, you want to be able to see and reply to their questions, driving you to read more.


- By linking reading to a [social activity]([[relatedness]]), such as engaging with community questions and talking with your friends who are reading the same book, reading becomes something exciting that you aren’t just doing for yourself.


- Push notifications are important because they cause us to think about the app and possibly open it. In the moments when we open the app, we draw connections between the situation we’re in and the app, and over time, [some of those situations become triggers](https://www.youtube.com/watch?v=hVDN2mjJpb8) for thinking about and opening Kindle, ingraining it into our daily life.


- For those that don’t want to read socially, [[autonomy]] and [[competence]] could be tied in by letting you set daily or weekly reading goals in terms of page numbers or time spent reading.


- Since you create your own goals, you feel as though you’re in control of your own behavior.


- When you complete your goals, you’re demonstrating your [[competence]] to yourself.


- If you aren’t consistently reaching your goals, the app could ask you if you would like to reduce the size of your goal a little to make it more doable and let you win. #[[Games intentionally design for [[failure state]] recovery]]


- If it told you how many days or weeks in a row where you attain your goals, this adds in the power of the [[endowment effect]] and [[loss aversion]], where we put in great effort to avoid losing what we own. Push notifications could then be added to say something to the tune of "You’ve kept up with your reading goals 6 days in a row! Would you like to keep the streak going?”
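- The streak mechanic described above reduces to counting consecutive goal-met days backwards from today. A minimal sketch with hypothetical dates (treating an unfinished "today" as not yet breaking the streak):

```python
from datetime import date, timedelta

def current_streak(goal_days, today):
    """Length of the run of consecutive goal-met days ending today,
    or ending yesterday if today's goal isn't met yet."""
    days = set(goal_days)
    anchor = today if today in days else today - timedelta(days=1)
    streak = 0
    while anchor in days:
        streak += 1
        anchor -= timedelta(days=1)
    return streak

met = {date(2023, 3, d) for d in range(10, 16)}  # goals met Mar 10-15
streak = current_streak(met, date(2023, 3, 16))
message = f"You've kept up with your reading goals {streak} days in a row! Would you like to keep the streak going?"
```

  The grace period for "today" is a design choice: without it, every morning notification would report a broken streak.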


- Note: My thoughts on streak counters have changed since originally posting the article. See "[[example]] of Hollow Knight and [[streak counter]]s"


- One missing element of this that I've noticed since originally writing is the role of identity in this. Should comments and questions be anonymous, or should you be able to gain familiarity with other users who are reading the same books as you? I lean towards a consistent identity, but there are challenges that would arise with scale.


- Scale problems could be addressed by placing you into random small-medium sized groups of up to a few thousand people that are arranged to maintain at least a certain number of actively posting participants.
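- One way that grouping could work is random partitioning with an activity invariant. A rough sketch; the group-size cap, activity threshold, and user data are all made up for illustration:

```python
import random

def assign_groups(users, is_active, max_size=2000, min_active=50, seed=0):
    """Randomly partition users into groups of at most max_size, then
    check that every group keeps at least min_active actively posting
    members. A real system would rebalance groups that fail the check;
    here we just report the invariant."""
    rng = random.Random(seed)
    shuffled = list(users)
    rng.shuffle(shuffled)
    n_groups = max(1, -(-len(shuffled) // max_size))  # ceiling division
    groups = [shuffled[i::n_groups] for i in range(n_groups)]
    ok = all(sum(is_active(u) for u in g) >= min_active for g in groups)
    return groups, ok

# 6,000 hypothetical readers, 20% of whom post actively.
groups, ok = assign_groups(range(6000), lambda u: u % 5 == 0)
```

  With random assignment and a reasonable activity rate, each group lands near the expected number of active posters; the invariant check is where you would hook in rebalancing or merging of quiet groups.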


- Behavioral Product Strategy is the art and science of understanding users better through the lenses of the behavioral sciences in order to make better product decisions. It can be ignored at your own risk, since [[Every app is designed for behavior change, intentionally or unintentionally]].


- [[[[Expectancy Value Theory]] and its role in [[gamification]]]]


- {{[[TODO]]}}


- **In my work I deal with a flood of information and manage many concurrent projects.** I run a behavior design and gamification consultancy called Influence Insights. I’m also a behavioral product strategist for a startup studio called Spark Wave. In both roles, I work with multiple products simultaneously, applying what I learn and know about behavioral science, game design, and product design to apps so users are more likely to engage in the behaviors that make them successful in the app.


- definition:: The feedback loop of applying behavioral science theory and findings to influence [[target behavior]]s[++](((CTx1I2--q))) in real world situations and learning from the outcomes.



- [[failure state]]


- When user behavior doesn’t lead to user success, I’m tasked with understanding why said behavior isn’t already occurring, framing the problem in a useful way, and generating creative solutions.


- It is important to design for behavior change because [[Products are fundamentally voluntary]] and [[Every app is designed for behavior change, intentionally or unintentionally]].


- [[difficulty matching]]


- I’m frequently taking in a lot of information from varied sources, while considering many diverse problems at once. I’m meeting with an assortment of different people who all possess their own interesting perspectives and questions. **As a result, my database is chaotic, but that chaos has just enough structure to enable systematic synthesis and ideation when needed.**


- My goal in designing for behavior change is usually to increase [[user involvement]].


- **In a recent project, I was tasked with designing the onboarding system for [GuidedTrack](https://www.guidedtrack.com/)**, a basic programming language/app that allows social science researchers with no coding experience to develop flexible experiments and basic apps. We’re doing an official launch soon (anybody can use it today, but we haven’t started marketing it yet). Before we launch, we want proper onboarding.


- Roam, better than any other tool, allows me to allocate my attention in many different directions while ultimately consolidating when needed. For every project I work on, I take many sorts of notes:


- Meeting notes, which are tagged with [[meeting notes]], the people who were involved, the date, and the topics of discussion within the outline.


- Notes on my research into topics related to behavioral science and game design. These may or may not have been directly done for a specific project, but almost always end up useful.


- Freewriting sessions, where I flexibly jump from thought to thought, topic to topic, and project to project, generating many ideas at once in Daily Notes. This is one of my favorite ways to work.


- At this point, I have hundreds or even thousands of blocks that could help me come up with ideas about onboarding for GuidedTrack, many of which aren’t directly related to GuidedTrack. **It’s time to structure chaos**.


- This means:


- I need to narrow it down. There’s a massive amount of notes I could sift through and I have limited time, so I want to make sure my review time is as useful as possible.


- This isn’t a sprint. Finding the best possible solutions to multifaceted questions takes days, even weeks. The point of this process is to provide inspiration/clarity, limit wasted review, and crucially, allow me to pick up where I left off.
