Rightly or wrongly, people often get annoyed with disagreeable and critical people. One common argument is that they’re being disagreeable for strategic reasons: to “seem smart”, or something similar. But I think such strategic disagreeableness is less common than it might seem. Instead, I think that people who are critical of some claim or argument usually are so simply because they feel it’s flawed.

I think it’s much more common that people are being nice and warm for strategic reasons than that they’re being critical for such reasons. A poll I ran gives some evidence that strategic niceness is more common: [See thumbnail]

It also seems to make more sense. People tend to like nice and warm people, which gives an obvious incentive to pretend to be nice and warm. It can defuse many situations that could have become quite tricky if you had expressed the less warm emotions that you genuinely have. (I can easily recall many such situations from my life.)

By contrast, people are decidedly more ambivalent about disagreeable and critical people. It’s true that some people can find their criticisms impressive (depending on context), but in many cases, any such effect will be outweighed by other people finding them annoying. Thus it seems to me there’s much less of an incentive to be critical and disagreeable. In line with that, self-help books like Dale Carnegie’s How to Win Friends and Influence People tend to advise readers to be nice and likeable, rather than to be critical. (Granted, these books likely have a social desirability bias towards giving such advice, but it still seems like some evidence.)

Indeed, I think that being critical and disagreeable often incurs a net personal loss, and that people who have critical impulses often recognise that and actively try to resist them. In this respect, critical impulses may be somewhat akin to violent impulses (though on a very different scale, of course). It tends to pay off to please others, whereas displeasing them through criticism or, in particular, violence often does not. (Perhaps the environment of evolutionary adaptedness was different in this regard, which would explain why we have these disagreeable and violent impulses, but in modern society that seems to be true, at least.)

In fact, I think people who say that others are being critical for strategic reasons often do so not because the evidence suggests it to be the case, but because it’s an effective rhetorical technique. It’s an ad hominem attack that makes critical people look insincere. But even though critical people often overdo it, I think they’re typically not insincere. https://lnkd.in/em3XPWb8
Stefan Schubert’s Post
-
"But most people tend to focus on the spotlight question when they argue, and fail to consider how their arguments and their use of language affect other issues. In effect, they engage in what can be called epistemic discounting. Just as we value present benefits more than future benefits—temporal discounting—so we value our arguments’ epistemic effects on the spotlight question more than their epistemic effects on other questions.*

In my view, we should correct for this tendency. We engage in too much epistemic discounting, much like we engage in too much temporal discounting. Instead, we should argue in a way that has positive indirect effects beyond the spotlight question. Specifically, we should:

* Make sure supporting arguments are actually correct, whether or not they support our view on the spotlight question
* Go beyond the minimum levels of clarity necessary from the perspective of the spotlight question
* Use technical or standard terms where available
* Follow good epistemic norms that have positive downstream consequences

It’s likely no coincidence that these points are all old-school scientific/academic ideals. Science has developed norms that are consistent with a lower rate of epistemic discounting than the human default. It’s not as myopically focused on the spotlight question, but takes a more zoomed-out perspective. (This is not to say that contemporary academics couldn’t improve in this regard, though.)" https://lnkd.in/dAT9XUp8
Against epistemic discounting
stefanschubert.substack.com
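The discounting analogy in the excerpt above can be made concrete with a toy model. This is a minimal sketch, not anything from the post itself: the function names, the `weight` parameter, and all the numbers are hypothetical, chosen only to illustrate how down-weighting an argument's effects on non-spotlight questions parallels down-weighting future benefits.

```python
def discounted_value(benefit, delta, t):
    """Exponential temporal discounting: a benefit received t steps in
    the future is weighted by delta**t, where 0 < delta < 1."""
    return benefit * delta ** t

def argument_value(spotlight_effect, other_effects, weight):
    """Toy analogue of epistemic discounting: an argument's effect on the
    spotlight question counts in full, while its epistemic effects on
    other questions are down-weighted by `weight` < 1. Correcting for
    epistemic discounting amounts to moving `weight` towards 1."""
    return spotlight_effect + weight * sum(other_effects)

# A benefit of 100 arriving two steps in the future, at delta = 0.9:
print(discounted_value(100, 0.9, 2))  # ~81.0

# An argument that helps on the spotlight question (+1.0) but muddies two
# other questions (-0.7 each) looks good to a heavy epistemic discounter
# (weight 0.1), and bad once side effects are counted in full (weight 1.0):
print(argument_value(1.0, [-0.7, -0.7], weight=0.1))  # ~0.86
print(argument_value(1.0, [-0.7, -0.7], weight=1.0))  # ~-0.4
```

The point of the sketch is just the sign flip in the last two lines: the same argument can look worthwhile or harmful depending on how heavily its non-spotlight effects are discounted.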
-
"When others disagree with us, we often have reason to update our beliefs in their direction. We should defer to our epistemic peers (and superiors). And yet we often fail to do so to the extent that we should. We suffer from egocentric discounting: we put too much emphasis on our own inside views relative to the views of others. This has been much discussed. But there’s a related tendency that’s not discussed as much (as far as I can tell, but correct me if I’m wrong), namely our tendency not to engage with the ideas of others. (Though see Robin Hanson.)

In conversation, we’re often faced with the choice between making some claim of our own and reacting (critically or otherwise) to a claim of our interlocutor’s. It seems to me that people too often choose the former option. Many conversations are effectively “duologues”, where people take turns at having monologues that are only vaguely related to each other. They’re “waiting to speak” about the things that interest them, and don’t listen very carefully to what the other person has to say." (Continues) https://lnkd.in/dH4XiEDj
Egocentric epistemics: underdeference and underengagement
stefanschubert.substack.com
-
"In the nature of things, individuals usually have less influence over events than larger groups. However, there are some cases where that’s not true—where individuals can unilaterally take a course of action, and where the effect is just the same as if a whole group had taken it. A salient example is the dissemination of dangerous knowledge. Suppose that a group of scientists have discovered how to create a dangerous virus, and there is the question of whether to publish the results. In this case, an individual scientist publishing the results unilaterally leads to precisely the same outcome as the group making a collective decision.

Nick Bostrom, Tom Douglas, and Anders Sandberg call this type of situation “the unilateralist’s curse”, and discuss a range of cases where it applies, such as geoengineering and the spread of genetically modified organisms. In many of these cases, individual players acting unilaterally plausibly leads to harmful outcomes (though that need not always be the case). That doesn’t mean, however, that individual players acting unilaterally always have selfish or nefarious preferences. On the contrary, they can be perfectly altruistically motivated. Idiosyncratic beliefs about outcomes can be sufficient for unilateral action. This makes the unilateralist’s curse particularly tricky to avoid.

Though Bostrom et al. primarily (but not exclusively) discuss cases related to catastrophic risks, the unilateralist’s curse is ubiquitous in all sorts of domains. An example I’m interested in is meetings." (Continues) https://lnkd.in/dX4P_u5u
The unilateralist’s curse prolongs meetings
stefanschubert.substack.com
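The structure of the curse can be illustrated with a small simulation. This is a hypothetical sketch of the Bostrom–Douglas–Sandberg setup as I understand it, not code from their paper, and all parameter values are made up: each actor forms a noisy estimate of an action's true value and acts unilaterally if the estimate is positive. Even when the true value is negative, the chance that at least one actor overestimates it, and therefore acts, grows with the number of actors.

```python
import random

def unilateral_action_rate(n_actors, true_value=-1.0, noise=1.0, trials=10_000):
    """Fraction of trials in which at least one of n_actors acts.

    Each actor draws an independent noisy estimate of the action's true
    value and acts unilaterally if the estimate comes out positive. Since
    the true value is negative, every action taken is a mistake."""
    acted = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise) for _ in range(n_actors))
        if any(e > 0 for e in estimates):
            acted += 1
    return acted / trials

random.seed(0)  # fixed seed for reproducibility
for n in (1, 3, 10):
    # The rate rises with the number of actors: roughly 0.16, 0.40,
    # and 0.82 for these (made-up) parameter values.
    print(n, unilateral_action_rate(n))
```

Note that no actor needs to be malicious or even unusually reckless for this to happen; idiosyncratic estimation error alone drives the rising action rate, which is the point made in the excerpt.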
-
"I’ve previously argued that many proverbs and bon mots don’t make much sense. For instance, I’ve argued that it’s typically not the case that A little knowledge is a dangerous thing, but that the benefits of knowledge tend to rise monotonically. I think there are many such cases.

But the Norwegian scholar Jon Elster has an alternative view of proverbs and bon mots. Elster notes that there are many pairs of proverbs that at first glance seem to contradict each other, such as Out of sight, out of mind and Absence makes the heart grow fonder. However, in Elster’s view, they are not to be interpreted as statements about the net effects of being apart. Instead, they are statements about the gross effect of individual mechanisms. They are pro tanto-claims, not all things considered-claims. Being apart can trigger several mechanisms: it can make you forget about the absent person (thereby lowering your interest), but it can also make you idealise them (thereby increasing your interest). Which of these mechanisms is stronger depends on the details of the case at hand; on which mechanism is triggered the most. (Elster notes that La Rochefoucauld argued that Absence lessens moderate passions and intensifies great ones, as the wind blows out a candle but fans up a fire.)

This is all characteristically clever and insightful from Elster. However, I’m not sure it correctly describes how people actually use proverbs. People conflate pro tanto-claims and all things considered-claims all the time. As far as I can tell, people who use Out of sight, out of mind are typically not thinking “but this is just one mechanism out of several, and it’s possible that countervailing mechanisms are stronger”. Rather, it seems to me that they are usually oblivious of additional mechanisms. They use their wise proverb as a conversation halter that settles the issue, and are not as epistemically virtuous as Elster’s analysis suggests. My experience is that people often utter proverbs in a self-satisfied tone that is antithetical to the Scout Mindset." (Continues)
Proverbs and irrationality
stefanschubert.substack.com
-
A video of my EAGx Cambridge talk on virtues for effective altruists just went up. I argued that:

1. Effective altruists should cultivate virtues. Effective altruism is usually couched in terms of what actions to take: what charity to donate to, and what job to choose. But we often take actions that we wouldn’t endorse on reflection, because we’re affected by temptations, biases, and other psychological obstacles. To overcome these obstacles, we need to cultivate relevant virtues.

2. Specifically, effective altruists should cultivate both common-sense virtues like honesty, integrity, and kindness, and virtues that effective altruists emphasise more than others do. I discuss six such virtues: effectiveness-focus, truth-seeking, collaborativeness, determination, moderate altruism, and moral expansiveness.

See also this paper by Lucius Caviola and myself: https://lnkd.in/dAXRvTRF https://lnkd.in/dZY7nEye
Virtues for effective altruists
stefanschubert.substack.com
-
I'm starting a new Substack today: https://lnkd.in/dQyZU234 Please subscribe. This first post consists of a list of sociological takes on things that are overrated or underrated, in the style of Tyler Cowen. I’ve written about these topics before, so the list provides an overview of much of my thinking about politics and society. https://lnkd.in/dSx6N5BP
Stefan’s Substack | Stefan Schubert | Substack
stefanschubert.substack.com
-
Derek Parfit and the implications of extraordinary talent — Stefan Schubert
stefanfschubert.com
-
An important question for people focused on AI risk, and indeed for anyone trying to influence the world, is: how centralised is power? Are there dominant actors that wield most of the power, or is it more equally distributed?

We can ask this question on two levels. On the national level, how powerful is the central power—the government—relative to smaller actors, like private companies, nonprofits, and individual people? On the global level, how powerful are the most powerful countries—in particular, the United States—relative to smaller countries?

I think there are some common heuristics that lead people to think that power is more decentralised than it is, on both of these levels. One of these heuristics is what we can call “extrapolation from normalcy”:

Extrapolation from normalcy: the view that an actor seeming to have power here and now (in relatively normal times) is a good proxy for it having power tout court.

It’s often propped up by a related assumption about the epistemology of power:

Naive behaviourism about power (naive behaviourism, for short): the view that there is a direct correspondence between an actor’s power and the official and easily observable actions it takes. In other words, if an actor is powerful, then that will be reflected by official and easily observable actions, like widely publicised company investments or official government policies.

(Continues) https://lnkd.in/dBmvs_sq
Crises reveal centralisation — Stefan Schubert
stefanfschubert.com
-
"Thus, across multiple domains, the valence of motives often comes apart from that of behaviours and outcomes. Negatively coded motives often lead to bad behaviours and bad outcomes, while positively coded motives often lead to good behaviours and good outcomes. And yet people often err on this point, thinking that the valence of the motive must correspond to the valence of the behaviour and the outcome. Why is that?" https://lnkd.in/edGJHbkq
-
A few years ago, John Salvatier wrote a great article called "Reality has a surprising amount of detail". He argues that everything from stairs to computer programs to boiling water is far more complex than it initially seems. We often underrate how detailed things are, in a way that can set us back. I think these are good points, and want to give an explanation. Why is reality so surprisingly detailed? https://lnkd.in/es277Wb6
Why reality appears surprisingly detailed — Stefan Schubert
stefanfschubert.com