
ISBN 978-1-4503-5621-3/18/04. https://doi.org/10.1145/3170427.3188397

“So, Tell Me What Users Want, What They Really, Really Want!”

Ulrik Lyngs, Dept. of Computer Science, University of Oxford, ulrik.lyngs@cs.ox.ac.uk

Reuben Binns, Dept. of Computer Science, University of Oxford, reuben.binns@cs.ox.ac.uk

Max van Kleek, Dept. of Computer Science, University of Oxford, max.van.kleek@cs.ox.ac.uk

Nigel Shadbolt, Dept. of Computer Science, University of Oxford, nigel.shadbolt@cs.ox.ac.uk
(2018)
Abstract

Equating users’ true needs and desires with behavioural measures of ‘engagement’ is problematic. However, good metrics of ‘true preferences’ are difficult to define, as cognitive biases make people’s preferences change with context and exhibit inconsistencies over time. Yet, HCI research often glosses over the philosophical and theoretical depth of what it means to infer what users really want. In this paper, we present an alternative yet very real discussion of this issue, via a fictive dialogue between senior executives in a tech company aimed at helping people live the life they ‘really’ want to live. How will the designers settle on a metric for their product to optimise?

keywords:
Preference elicitation; well-being; values in design; eudaimonic and hedonic UX.
category:
H.5.m. Information Interfaces and Presentation (e.g., HCI): Miscellaneous
conference: CHI’18 Extended Abstracts, April 21–26, 2018, Montreal, QC, Canada
© 2018 Association for Computing Machinery.

1 Introduction

The question of how to bridge the potential gap between what users actually do and what they ‘really wanted’ to do has a relatively long history in human-computer interaction research. In the 1960s, Warren Teitelman’s ‘Do What I Mean’ (DWIM) philosophy argued that systems should not just execute whatever potentially erroneous instructions users put into a terminal [31]. Instead, they should try to interpret users’ true intentions and correct their errors (the implication being ‘Do What I Mean, Not What I Say (or Do)’). In practice, Teitelman’s error-correction systems were critiqued as merely reflecting what their designer would have meant (‘do what Teitelman means’) [30].

The issue crops up in more fundamental ways in the domains of decision support and recommender systems, where the gap is not just between what the user typed and what they really intended, but between recorded interaction behaviour and what can be inferred about the user’s wants and needs. On those rare occasions where HCI researchers in such fields venture into the moral minefield of defining ‘what users really want’, they often provide definitions which are intuitively reasonable yet dissatisfyingly cursory; the related philosophical debate is then swiftly swept under the carpet.

For instance, in an otherwise enlightening chapter, Jameson et al. offer the following on what decision-support systems should optimise for: “… a ‘good outcome’ [is] one that the chooser is (or would be) satisfied with in retrospect, after having acquired the most relevant knowledge and experience. Admittedly, this assumption is subject to debate…”  [15, p. 35].

Similarly, Pommeranz et al. write in relation to methods for eliciting user preferences: “More research is needed to design preference elicitation interfaces that elicit correct preference information from the user” [24, p. 361]. Later, they consider what normative aspects might be required to determine such ‘correctness’ of a preference and find answers to this question in short supply: “There is much room for more explicit consideration of human preference construction also including values and affective aspects” [24, p. 365].

Parallel questions are arising in recent AI research. For instance, techniques like ‘inverse reinforcement learning’ (IRL) attempt to infer an underlying goal function from behavioural output [22]. It is implied that such goal functions are equivalent to the ‘true desires’ of the human from whom the machine is learning. Indeed, as Stuart Russell put it, a well-aligned AI “will watch all of us to learn more about what it is that we really want” (talk at TED2017, https://www.ted.com/talks/stuart_russell_how_ai_might_make_us_better_people/transcript). A common assumption in these techniques is that humans make optimal decisions, with deviations from optimality reflecting ‘random noise’ in action selection [12]. However, real human decision-making deviates systematically from optimality, because of cognitive biases like asymmetric perception of losses and gains, which makes people sensitive to how identical outcomes are framed, or hyperbolic discounting of future rewards, which often makes people inconsistent over time [32, 16]. A rational model may wrongly assume that the true preference of, for example, a smoker who is trying (and failing) to quit is to smoke [12].
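To make this failure mode concrete, consider the following minimal sketch (our own toy illustration, not drawn from the cited work; the names, numbers, and the choice of a softmax likelihood are assumptions). A naive observer that assumes noisy ‘Boltzmann’ rationality can only explain the failing quitter’s lapses as random noise, and so concludes that smoking is what the person ‘really wants’:

```python
# Illustrative sketch only (a toy example, not the method of [12] or [22]):
# a naive observer assumes Boltzmann-rational action selection,
# P(a) proportional to exp(beta * reward[a]), and infers the reward that
# best explains an inconsistent would-be quitter's behaviour.
import numpy as np
from scipy.optimize import minimize

actions = ["abstain", "smoke"]
# Hypothetical observations: the person intends to quit, but in the moment
# hyperbolic discounting usually wins, so most observed choices are 'smoke' (1).
observed = np.array([1, 1, 1, 0, 1, 1, 1, 0, 1, 1])

def neg_log_likelihood(reward, choices, beta=1.0):
    """Negative log-likelihood of the choices under noisy ('Boltzmann') rationality.
    Systematic self-control failures are indistinguishable from random noise here."""
    logits = beta * reward
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -np.sum(log_probs[choices])

# Maximum-likelihood rewards under the (wrong) assumption of a stable, rational agent.
result = minimize(neg_log_likelihood, x0=np.zeros(len(actions)), args=(observed,))
inferred = result.x - result.x.mean()  # rewards are only identified up to a constant
for action, r in zip(actions, inferred):
    print(f"inferred reward for {action!r}: {r:+.2f}")
# 'smoke' receives the higher inferred reward: the model concludes that smoking is
# this person's 'true preference', leaving no room for their reflective goal to quit.
```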

Indeed, some recent work in this space acknowledges the limitations of the pure behaviourist dream. Armstrong et al. argue that current IRL methods are “fundamentally and philosophically incapable of establishing a ‘reasonable’ reward for the human”, which can only be overcome, they argue, by building in “normative assumptions about the reward and/or planner”  [3, p. 2]. However, the authors stop short of articulating what those normative assumptions might be.

An alternative approach is to devolve responsibility for those normative assumptions back to the user. Numerous ways of explicitly eliciting users’ preferences have been explored in the literature, including user ratings and example-critiquing in recommender systems [25, 24] and absolute measurement and pairwise comparison in decision support systems [2, 7]. Moreover, pioneering work in this space explored ways to let users inspect and tweak a system’s model of them [8, 19, 25, 18].

However, explicit methods also run into problems. Foremost, the elicitation process itself greatly influences what users say they want. For example, users may prefer different options based on whether they are framed as losses or gains [24], or depending on the moment in time in which they are asked. What point in time reflects what the user ‘really’ wants - the most recent, a weighted average over the last day/week/year, or something else [17, 26]?

Related to the question of timeframes is the tension between broad and narrow construals of the user’s context. One of the foundational tenets of user-centred design, as articulated by Ritter et al., is to consider the user context more broadly [27]: to move beyond the immediate, task-related issues pertaining to a specific product, where the user’s goals can be more easily approximated, and instead view applications of technology as “the development of permanent support systems and not one-off products that are complete once implemented and deployed” [27, p. 44]. In other words, the designer should consider the longer-term effects of systems on people’s lives, which in turn requires deeper insight in order to align systems with users’ more general goals, values, and life situation.

The question of how to elicit ‘correct’ preferences therefore ends up connecting with the philosophical debate on the fundamental constituents of ‘what makes someone’s life go best’. While it is possible that one’s best life could be at odds with one’s desires, it is often held that being able to satisfy ‘true desires’ is at least partly constitutive of the good life. Most philosophers’ answers tend to involve one or more of the following: pleasurable experiences or ‘hedonism’ (e.g. [5]), where a good life is one full of pleasurable experiences; desire-satisfaction (e.g. [23]), where a good life is one in which one’s desires are fulfilled; and objective list theories, where a good life is one in which certain objectively worthwhile things are experienced, achieved or engaged in (e.g. [14]). Hedonists often concede that certain pleasures are more truly constitutive of a good life than others, and proponents of desire-fulfilment acknowledge that the fulfilment of misguided desires contributes little to one’s best interests. As we hope to show, these philosophical debates underlie some of the conceptual difficulties encountered by the more practically-oriented HCI and AI research cited above.

To illustrate this, we now present an alternative yet very real take on the issue, via a fictional dialogue between senior executives in a global tech company whose product aims to help people live their best possible life. How can they settle on the metric their product should optimise so that they give their users what they really want? In addition to the literature reviewed so far, the positions of our fictive designers are inspired by common views within classical economics and rational actor models [20], behavioural economics [32, 16], and philosophical and psychological work on the ’good life’ [28, 9], as well as common ideological stances. We acknowledge that this is only a portion of the relevant literature, and we have taken artistic license in our formulations of each stance. As such, we cannot claim our fictive characters to be fully and fairly representative of the possible range of positions. Furthermore, any resemblance to real persons is probably not accidental.

2 What Users Really Want: The Tale of Gamaface

SCENE: The Californian morning sun shines through the gleaming windows into a pristine meeting room at the headquarters of Gamaface, a global technology company. Gamaface is the industry leader in what they call ’algorithmic life services’. Their augmented reality platform - which combines decision support, persuasive computing, and ubiquitous personalised nudging - is used by 3 billion users 24 hours a day, 7 days a week.

CAST:

  • Sunny Zuckerbezos, CEO of Gamaface
    @Walkin_on_sunshine | Mission: connecting people and data to make life worth living.

  • Randy Na, Information Architect
    @Nozick_SoSick | Libertarian | Autono-me, autono-you | "I hold it to be the inalienable right of anybody to go to hell in his own way."

  • Harald Richter, User Researcher
    @WinkWinkNudgeNudge | Pavlov’s dog, striving to be Pavlov’s bell. #cognitivebias #positivepsychology

  • Nichola Machian, Lead Ethicist
    @eudaimonia_for_all | "Meaning arises when subjective attraction meets objective attractiveness"

2.1 ACT: In search of a metric

Sunny: So, I’ve called you all here to pitch ideas for a new metric that will drive the direction of our services for the future. As you know, millions of people rely on our technology to guide their every waking (and sleeping) minute. The Gamaface mission has always been about our users … giving them what they want, helping them to live better lives, the lives they really want to live. Across all our users’ devices - laptops, smartphones, smartwatches, and smart glasses - our activity feed suggests what they should do next to live their ideal life. We help them find stuff that’s ‘relevant’, see their most ‘engaging’ videos, and ping them ‘helpful’ info when they need it.

But what does any of that actually mean? How can we be sure that we are giving users what they really want? What we need, my friends, is a clear answer to this question; a new metric towards which all our services should be geared; a new optimisation metric for life. So come on, hit me with your ideas!

Randy: I’m going to stop you right there, sir, if I may. What’s wrong with our existing systems? We infer what users want from what they do and what other people like them do. If they spend every spare second watching cat videos, then our algorithms should give them more cat videos. If they keep watching them, that means our algorithms got it right. If they don’t like them they will stop looking at them. Our algorithms will then show them less in the future …

Harald: Woah there. I totally disagree. People are slaves to simple reward functions inherited from our evolutionary past. We know how to hack these reward systems, so if we leave people to their own devices (no pun intended) they will simply do whatever our algorithms nudge them to do. That might be binge-watching cat videos and ordering takeout pizza. It probably won’t be filling in their tax returns or exercising …

Nichola: But we could be nudging them to do those things instead! Even better, we could nudge them to do something truly worthwhile, like reading poetry, or contributing to science, or meditating on the miracle of their very existence!

Randy: How patronising! As cosmopolitan, liberal, college-educated, Silicon Valley elites, who are you to decide what people should be ’nudged’ towards? Sounds pretty paternalistic to me!

Sunny: OK people, let’s work together here. On the one hand, Harald and Nichola are on to something: our algorithmic life services shouldn’t feed people’s worst habits. But Randy also has a point; Gamaface must remain a neutral platform, with no political, ethical or aesthetic biases. All our algorithms are just based on pure mathematics and user behaviour - not on force-feeding them a specific notion of the good life.

Randy: Exactly. The point of our service is to free people from the tyranny of the structures that control their lives - the government, social norms, the media - and let them do whatever they want to do. If 99% of people choose to indulge what you call their ‘worst habits’, that’s their prerogative and we should help them. Equally, the 1% who want to waste their time reading poetry are entitled to do so!

Nichola: Are you really saying you believe that there is no objective difference between the aesthetic value of cat videos and Shakespeare?

Randy: My opinion of the value of anything isn’t the point here. The point is that none of us get to decide that for someone else!

Nichola: Ah, but you’ll agree that each individual may value different activities or pursuits as ’higher’ and ’lower’? And if someone wants to pursue something they think is worthwhile - say, reading poetry - we should help them?

Randy: Mhmmm …

Nichola: So if they wish they would read more poetry, but find themselves watching cat videos, we should stop the cat videos and replace them with poetry!

Randy: Nope! People gotta live with the consequences of their failure to live up to their ideals!

Harald: Aha! You’re admitting that people might sometimes end up doing things they don’t really want to do! In fact, this is a well documented phenomenon in behavioural science. Our system 1 - the fast, powerful, routine animal part of our brain - wants one thing, and system 2 - the slow, rational, reflective human part - wants another. Unfortunately, even though people identify with their system 2, system 1 usually gets its own way.

Randy: Well, I don’t know about that system 1, 2 mumbo-jumbo - speak for yourself pal, my systems are all fine, thank you very much. I guess maybe sometimes people are conflicted about what they want … but even then, who are you to interpret which desire should prevail - which system represents what they really want?

Nichola: We shouldn’t decide between us in this room, but we could let the wisdom of science and philosophy decide! Psychologists have spent decades figuring out which activities and circumstances make people happy with their lives, and philosophers have pondered it for millennia. Shouldn’t our systems help people live the way experts say makes a life go well?

Randy: That’s absurd! The world is changing faster than ever before, and you suggest that some dead navel-gazing philosophers know what’s best for people? Or maybe that we nudge everyone to get married if psychologists say married people are happier on average? What someone really wants is a function of their free will and their unique personality, and no one besides themselves is in a position to know what’s best for them …

Nichola: Are you seriously denying that things like close friends, meaningful work, good physical health, and feeling competent and valued in your community are universal constituents of a good life? It seems pretty obvious that people have fundamental - and universal - needs that must be satisfied for them to feel their lives are going well. We should help them get what will make them happy and fulfilled, even if they haven’t themselves realised what the important things in life are …

Harald: But people also want to feel that they’re in control of their lives - we can’t dictate to them what they should want. Hmm… but what we could do is help them reflect on what they want from life and then let them set the nudges they need from our products accordingly …

Randy: For God’s sake… People don’t want to live like saints! It’s all very well to meditate on what your ‘ideal self’ might want but sometimes people just want to indulge and we shouldn’t make them feel bad about that! Besides, who wants to be forced to reflect like some Buddha on your life? Most people are lazy.

Sunny: OK, there’s a lot of stuff here. Let’s come back to my original question to you all. Given everything you’ve said, what is the metric you think we should optimise, and how do we do it?

2.1.1 Metric #1: Engagement with preferred options

Randy: Fundamentally, we must believe that our users can choose for themselves what the good life is - anything else is frankly disrespectful. So our metric should be rooted in behaviour: if people engage more with the options we give them, it must, all else being equal, be because they find them valuable. ‘But what if they’re addicted’, you say? We correct for that by asking them now and then whether they are currently doing what they most want to be doing given their options. Surely, if they’re addicted, their answer will be ‘no’. So we use two metrics: what they actually do and what they in the moment say they most want to do. When we optimise both, then we give them what they really want.

So basically, we’re almost there. Our apps and augmented reality systems already put the options users engage with the most in front of their eyes and fingertips. But we build a little extra that we could brand Gamaface Autonomy SenseTM: a simple overlay that occasionally asks our users, on a scale from 1-10, to what degree they are currently doing what they most want to be doing given their options. We get responses across a user’s different activities, add some random noise to the activities we recommend, et voilà! We learn the activity schedule that optimises the user’s engagement with preferred options.
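As a purely illustrative aside (ours, not Gamaface’s): the sketch below shows one minimal way Randy’s two signals - observed engagement and the occasional 1-10 ‘Autonomy Sense’ probe - could be folded into a single per-activity score, with a little exploration noise as he suggests. All activity names, weights, and the epsilon value are invented for the sketch.

```python
# Toy sketch of Randy's 'engagement with preferred options' metric. Everything
# here (activities, weights, epsilon) is hypothetical and for illustration only.
import random
from collections import defaultdict

ACTIVITIES = ["cat_videos", "poetry", "tax_return", "exercise"]
EPSILON = 0.1            # Randy's 'random noise': occasionally recommend something else
ENGAGEMENT_WEIGHT = 0.5  # assumed trade-off between behaviour and momentary self-report

engagement = defaultdict(list)  # minutes spent each time an activity was recommended
preference = defaultdict(list)  # 1-10 answers to "are you doing what you most want to?"

def score(activity):
    """Blend engagement (normalised against an hour) with momentary preference (out of 10)."""
    if not engagement[activity]:
        return 0.5  # neutral prior for activities we have not yet observed
    avg_minutes = sum(engagement[activity]) / len(engagement[activity])
    avg_rating = sum(preference[activity]) / len(preference[activity])
    return ENGAGEMENT_WEIGHT * min(avg_minutes / 60, 1.0) + (1 - ENGAGEMENT_WEIGHT) * avg_rating / 10

def recommend():
    """Usually surface the top-scoring activity; sometimes explore at random."""
    if random.random() < EPSILON:
        return random.choice(ACTIVITIES)
    return max(ACTIVITIES, key=score)

def record(activity, minutes_spent, momentary_rating):
    """Log what the user did and their in-the-moment 1-10 rating for that activity."""
    engagement[activity].append(minutes_spent)
    preference[activity].append(momentary_rating)

# Example: record("cat_videos", 45, 4); record("poetry", 20, 9); print(recommend())
```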

2.1.2 Metric #2: Regret when reflecting on past activity

Harald: But people often regret their past behaviour, even if they thought they wanted it in the moment! Really, what we should do is help them get as close as possible to how they wish to be when they take the time to reflect. Therefore, our metric should be the amount of regret when the user carefully considers her past activity. When we minimise this regret, then we are giving users what they really want.

So we build a system we brand Gamaface Deep Ideal Self LearningTM - this is guaranteed to play well with our audience! It lets the user review a timeline of her past behaviour. For each activity, or activity class, she rates on a scale from 1-10 how closely what she did matches what she’d like her ideal self to do, and/or how often she’d like her ideal self to do it. This will allow our algorithms to learn how to present activities and nudge her so that her actual behaviour gets closest to her ideal self.

Randy: This makes my skin crawl. First, it doesn’t matter what people would have chosen under ‘ideal’ reflective circumstances, whatever they might be (let me guess; sitting cross-legged like a Zen master while soothing dolphin noises play in the background?). Second, it’s disrespectful to override someone’s present desires in order to serve some inferred ‘ideal’ desires. Finally, the random, irrational, and downright messy nature of the human condition is what makes life worth living.

Nichola: Well, that explains all the garbage piled up around your desk. Harald, I think Randy is right that we shouldn’t place too much weight on what people regret about their past behaviour. Regret is an imperfect guide; the grass is always greener on the other side, and maybe that goes for deathbed lamentations too. Who’s to say that you wouldn’t have regretted an alternative life course even more? And if you don’t regret your actual life choices, maybe that’s because you don’t appreciate what could have been! Who knows which of all your possible future selves would have the ‘best life’?

Sunny: So are you saying it’s impossible?

Nichola: Well, not entirely. I think the solution is to draw on wisdom outside of the individual - putting the burden of figuring out what the good life is on the shoulders of each user is frankly setting them up for failure.

2.1.3 Metric #3.1: Similar, wiser users’ engagement and regret

Nichola: The first option is to learn from our users’ aggregated wisdom, and use comparisons across different people’s real lives as they actually lived them to infer the necessary conditions for a happy and meaningful life. So we can use Randy’s and Harald’s suggested systems to collect data, but the goal is to create metrics that refer to the collective elicited preferences of similar users with more life experience, instead of just whether the user himself thinks something is good for him. We are most likely to give users what they really want when we, based on the accumulated life wisdom of others, optimise expected engagement with preferred options and minimise expected subsequent regret. This system we call Gamaface Wisdom of AgeTM - a collaborative filtering approach to the good life, in which each user’s contribution is weighted by their experience. We set the default choices and nudges of our youngest users based on the wisdom of more experienced users.

2.1.4 Metric #3.2: Alignment with guru-guided good life

The other option is to use some expert guidance on the good life, or what we might call the ‘wisdom of gurus’: then our metric is alignment between a user’s circumstances and what the guru system says leads to an enjoyable/meaningful life.

Certain conditions have been found by psychological research to make everyone miserable, so some of the guru reference values should be the same for everyone. On the other end of the spectrum, there might be many different ways to make a life go really, really well, so here we could let users choose which guru system they prefer, like their favourite philosopher, religion, or other ideology. We give users what they really want when we minimise the conditions in their lives that reliably make people miserable (e.g. loneliness) and optimise the conditions their guru system of choice says make for an ideal life.

This system we call Gamaface Guru-Guided Good Life (G4L)TM. We involve different experts directly in creating the values and nudges. We could also offer users an exclusive option of having our system trained on a text corpus from their philosophy of choice. Imagine users bringing their favourite Buddhist texts - or your Ayn Rand novels, Randy? - and our machine learning algorithms learning the corpus’s values and calibrating the user’s system accordingly.

Sunny: Well, I’m going to have fun explaining all this to our shareholders… Time for lunch!

3 Discussion

Given the problems with equating users’ true needs and desires with simple behavioural measures of ‘engagement’, alternative metrics are needed. In this paper, we have discussed this issue in the context of a dialogue between designers in a fictive tech company searching for a metric with which to measure whether they are giving users what they ‘really’ want.

Despite the often glib treatment of these questions in HCI research, as seen in the examples in the introduction, there are some notable exceptions. The general critique of the behaviourist tendency in recommender systems is well-articulated by Ekstrand & Willemsen [11]. They suggest one promising corrective, which allows users to choose between different algorithms underlying their recommender systems [10]. In allowing some direct control of the inference process, this is reminiscent of earlier work which aimed to create ‘scrutable’ user models which are transparent and configurable  [18, 19].

Harald’s call for systems which help put users in a position from which they can reflect on themselves and their desires has some precedent in Slovak et al.’s proposal to design for a ‘reflective practicum’: a state in which someone can engage in transformative revision of their outlook or behaviour [29]. In addition, we suggest that empirical findings from positive psychology, and philosophical reflection on what makes life go well, will prove important as sources of more opinionated takes on how we might design to support people’s ‘better selves’ and draw on life wisdom that does not originate with the designer or the individual user [28, 9, 6].

Nichola’s call to consider the meaningfulness of user experiences is something explored by Mekler and Hornbæk, who argue that UX designers should consider eudaimonic (‘living the good and virtuous life’) as well as hedonic (‘pleasurable’) experiences [21]. Similarly, Zimmerman proposes ‘designing for the self’, such that products can help their users become the people they desire to be [33].

For those sympathetic to Randy’s position, the notion of an external force guiding users’ search for the good life may detract from the sense in which an individual ought to be responsible for their own life journey. However, behavioural economists have found that the crucial question for whether people are in favour of ‘nudging’ - in the sense of setting up choice environments to support particular kinds of behaviour - is whether or not they agree with the vision of the good life behind those nudges [13]. We can easily imagine a future in which users expect the right to choose which normative assumptions should be embodied in the algorithms feeding their recommender systems.

A central goal of this paper has been to show that whereas it is easy to criticise tech companies for equating users’ true preferences with simple metrics of engagement, it is not obvious what good alternative metrics look like. Every metric implicitly embodies particular assumptions about human nature - and ultimately about the good life - with few decisive arguments for favouring one as the ‘best’. It is a fallacy, however, to conclude that the question is therefore not worth bothering with and/or that all metrics are equally valid. Some ways of solving the problem are clearly worse - such as simply equating true preference with number of clicks or time spent using a service - even if there is no way to tell which of the better alternatives might be the global optimum. We believe that allowing users to choose between different ways for a system to infer their preferences - for example, implementations of Randy’s, Harald’s, and Nichola’s positions - would be a big step forward. To reach such a future, we need to properly engage with the issue and explore, build, evaluate, and discuss what good alternatives look like.

Some readers may still feel that this discussion is akin to moral philosophers’ “trolley problem” - important in principle, but not terribly relevant in practice [4]. However, even though the recommender and decision-support systems typically used today are limited in scope to specific task and interest contexts, more sophisticated systems could begin to model their longer-term effects on users’ lives, and users’ preferences towards such influence. In fact, such a future might be rapidly approaching. Addressing recent criticisms of Facebook, CEO Mark Zuckerberg announced that his personal challenge for 2018 included “making sure that time spent on Facebook is time well spent” (Facebook post, 4 January 2018, https://www.facebook.com/zuck?hc_ref=ARQqfRj278TDWekby2TLyI0A0meA4-4PxqohaalwAfzCeAsMaft16fKBkDYiHEg4cQk&fref=nf). Following up a week later, Zuckerberg elaborated that his team felt “a responsibility to make sure our services aren’t just fun to use, but also good for people’s well-being” and that he would be “changing the goal [of] our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions” (Facebook post, 11 January 2018, https://www.facebook.com/zuck?hc_ref=ARTYPggwbi_dcIl5f8M8r1dGZYZGhpWmAXwj_9C6g6mmSCSxA0dpxqWqEaPojN1IWD0&fref=nf). Time will tell which, if any, of the paths suggested by Randy, Harald, and Nichola will be taken by world-leading social media platforms.

References

  • [2] John A. Aloysius, Fred D. Davis, Darryl D. Wilson, A. Ross Taylor, and Jeffrey E. Kottemann. 2006. User acceptance of multi-criteria decision support systems: The impact of preference elicitation techniques. European Journal of Operational Research 169, 1 (2006), 273–285.
  • [3] Stuart Armstrong and Sören Mindermann. 2018. Impossibility of deducing preferences and rationality from human policy. (jan 2018). http://arxiv.org/abs/1712.05812
  • [4] Christopher W. Bauman, A. Peter McGraw, Daniel M. Bartels, and Caleb Warren. 2014. Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology. Social and Personality Psychology Compass 8, 9 (sep 2014), 536–554.
  • [5] Jeremy Bentham. 1996. The collected works of Jeremy Bentham: An introduction to the principles of morals and legislation. Clarendon Press.
  • [6] D Buss. 2000. Evolution of Happiness. American Psychologist 55, 1 (2000), 15–23.
  • [7] Li Chen and Pearl Pu. 2004. Survey of Preference Elicitation Methods. Technical Report. 1–23 pages.
  • [8] R. Cook and Judy Kay. 1994. The justified user model: A viewable, explained user model. Fourth International Conference on User Modeling September 2016 (1994), 145–150.
  • [9] Ed Diener and Martin E.P. Seligman. 2002. Very Happy People. Psychological Science 13, 1 (2002), 81–84.
  • [10] Michael D. Ekstrand, Daniel Kluver, F. Maxwell Harper, and Joseph A. Konstan. 2015. Letting Users Choose Recommender Algorithms. In Proceedings of the 9th ACM Conference on Recommender Systems - RecSys ’15. 11–18.
  • [11] Michael D Ekstrand and Martijn C Willemsen. 2016. Behaviorism is Not Enough: Better Recommendations through Listening to Users. Proceedings of the 10th ACM Conference on Recommender Systems - RecSys ’16 (2016), 221–224.
  • [12] Owain Evans, Andreas Stuhlmüller, and Noah D. Goodman. 2016. Learning the Preferences of Ignorant, Inconsistent Agents. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (2016), 323–329.
  • [13] William Hagman, David Andersson, Daniel Västfjäll, and Gustav Tinghög. 2015. Public views on policies involving nudges. Review of Philosophy and Psychology 6, 3 (2015), 439–453.
  • [14] Thomas Hurka. 1993. Perfectionism. Oxford University Press, New York.
  • [15] Anthony Jameson, Bettina Berendt, Silvia Gabrielli, Federica Cena, Cristina Gena, Fabiana Vernero, Katharina Reinecke, and others. 2014. Choice architecture for human-computer interaction. Foundations and Trends® in Human–Computer Interaction 7, 1–2 (2014), 1–235.
  • [16] Daniel Kahneman, Jack L Knetsch, and Richard H Thaler. 1991. Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias. The Journal of Economic Perspectives 5, 1 (1991), 193–206.
  • [17] Daniel Kahneman and Jason Riis. 2005. Living, and thinking about it: Two perspectives on life. In The Science of Well-Being, F. A. Huppert, N. Baylis, and B. Keverne (Eds.). Oxford University Press, New York, 285–304.
  • [18] Judy Kay. 1995. The um toolkit for cooperative user modelling. User Modeling and User-Adapted Interaction 4, 3 (1995), 149–196.
  • [19] Judy Kay. 1997. Learner know thyself: Student models to give learner control and responsibility. International Conference on Computers in Education 10 (1997), 17–24.
  • [20] Daniel McFadden. 1999. Rationality for Economists? Journal of Risk and Uncertainty 19, 1 (01 Dec 1999), 73–105.
  • [21] Elisa D. Mekler and Kasper Hornbæk. 2016. Momentary Pleasure or Lasting Meaning?: Distinguishing Eudaimonic and Hedonic User Experiences. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 4509–4520.
  • [22] Andrew Ng and Stuart Russell. 2000. Algorithms for inverse reinforcement learning. Proceedings of the Seventeenth International Conference on Machine Learning (2000), 663–670.
  • [23] Derek Parfit. 2012. What makes someone’s life go best. Ethical Theory: An Anthology 13 (2012), 294–298.
  • [24] Alina Pommeranz, Joost Broekens, Pascal Wiggers, Willem Paul Brinkman, and Catholijn M. Jonker. 2012. Designing interfaces for explicit preference elicitation: A user-centered investigation of preference representation and elicitation process. User Modeling and User-Adapted Interaction 22, 4-5 (2012), 357–397.
  • [25] Pearl Pu and Li Chen. 2009. User-involved Preference Elicitation for Product Search and Recommender Systems. AI Magazine 29, 4 (2009), 93.
  • [26] Donald A. Redelmeier and Daniel Kahneman. 1996. Patients’ memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures. Pain 66, 1 (1996), 3–8.
  • [27] Frank E Ritter, Gordon D Baxter, and Elizabeth F Churchill. 2014. User-centered systems design: a brief history. In Foundations for designing user-centered systems. Springer, 33–54.
  • [28] Martin E. P. Seligman, Tracy A. Steen, Nansook Park, and Christopher Peterson. 2005. Positive Psychology Progress: Empirical Validation of Interventions. American Psychologist 60, 5 (2005), 410–421.
  • [29] Petr Slovak, Christopher Frauenberger, and Geraldine Fitzpatrick. 2017. Reflective Practicum: A Framework of Sensitising Concepts to Design for Transformative Reflection. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2696–2707.
  • [30] Guy L. Steele and Richard P. Gabriel. 1996. The evolution of Lisp. In History of programming languages—II. ACM, 233–330.
  • [31] Warren Teitelman. 1966. PILOT: A Step Toward Man-Computer Symbiosis. Ph.D. Dissertation. MIT.
  • [32] Amos Tversky and Daniel Kahneman. 1974. Judgment under Uncertainty: Heuristics and Biases. Science (New York, N.Y.) 185, 4157 (sep 1974), 1124–31.
  • [33] John Zimmerman. 2009. Designing for the Self: Making Products That Help People Become the Person They Desire to Be. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). ACM, New York, NY, USA, 395–404.