Random Posts

Gary Taubes: Is Sugar Toxic?

In Robert Lustig’s view, sugar should be thought of, like cigarettes and alcohol, as something that’s killing us. But can sugar possibly be as bad as Lustig says?

Lustig’s argument is that sugar has unique characteristics, specifically in the way the human body metabolizes the fructose in it, that may make it singularly harmful, at least if consumed in sufficient quantities.

Gary Taubes writes in the New York Times:

The first symptom doctors are told to look for in diagnosing metabolic syndrome is an expanding waistline. This means that if you’re overweight, there’s a good chance you have metabolic syndrome, and this is why you’re more likely to have a heart attack or become diabetic (or both) than someone who’s not. (Although lean individuals, too, can have metabolic syndrome, and they are at greater risk of heart disease and diabetes than lean individuals without it.)

Having metabolic syndrome is another way of saying that the cells in your body are actively ignoring the action of the hormone insulin — a condition known technically as being insulin-resistant. Because insulin resistance and metabolic syndrome still get remarkably little attention in the press (certainly compared with cholesterol), let me explain the basics.

You secrete insulin in response to the foods you eat — particularly the carbohydrates — to keep blood sugar in control after a meal. When your cells are resistant to insulin, your body (your pancreas, to be precise) responds to rising blood sugar by pumping out more and more insulin. Eventually the pancreas can no longer keep up with the demand or it gives in to what diabetologists call “pancreatic exhaustion.” Now your blood sugar will rise out of control, and you’ve got diabetes.

Not everyone with insulin resistance becomes diabetic; some continue to secrete enough insulin to overcome their cells’ resistance to the hormone. But having chronically elevated insulin levels has harmful effects of its own — heart disease, for one. A result is higher triglyceride levels and blood pressure, lower levels of HDL cholesterol (the “good cholesterol”), further worsening the insulin resistance — this is metabolic syndrome.

When physicians assess your risk of heart disease these days, they will take into consideration your LDL cholesterol (the bad kind), but also these symptoms of metabolic syndrome. The idea, according to Scott Grundy, a University of Texas Southwestern Medical Center nutritionist and the chairman of the panel that produced the last edition of the National Cholesterol Education Program guidelines, is that heart attacks 50 years ago might have been caused by high cholesterol — particularly high LDL cholesterol — but since then we’ve all gotten fatter and more diabetic, and now it’s metabolic syndrome that’s the more conspicuous problem.

This raises two obvious questions. The first is what sets off metabolic syndrome to begin with, which is another way of asking, What causes the initial insulin resistance?

Sugar scares me too, obviously. I’d like to eat it in moderation. I’d certainly like my two sons to be able to eat it in moderation, to not overconsume it, but I don’t actually know what that means, and I’ve been reporting on this subject and studying it for more than a decade. If sugar just makes us fatter, that’s one thing. We start gaining weight, we eat less of it. But we are also talking about things we can’t see — fatty liver, insulin resistance and all that follows. Officially I’m not supposed to worry because the evidence isn’t conclusive, but I do.

***

If you’re still curious, check out what really makes us fat.

Daniel Dennett: Intuition Pumps and Other Tools for Thinking

After reading How to Make Mistakes and How to Criticize with Kindness, a reader sent in a link to this video of philosopher Daniel Dennett speaking at Google. Dennett is the author of the fabulous Intuition Pumps and Other Tools for Thinking.

Dennett deploys his thinking tools to gain traction on … thorny issues while offering readers insight into how and why each tool was built. Alongside well-known favorites like Occam’s Razor and reductio ad absurdum lie thrilling descriptions of Dennett’s own creations: Trapped in the Robot Control Room, Beware of the Prime Mammal, and The Wandering Two-Bitser. Ranging across disciplines as diverse as psychology, biology, computer science, and physics, Dennett’s tools embrace in equal measure light-heartedness and accessibility as they welcome uninitiated and seasoned readers alike. As always, his goal remains to teach you how to “think reliably and even gracefully about really hard questions.”

The Lie Detector Paradox

Vaughan Bell, of Mindhacks, wrote an interesting piece in the Guardian on ‘Lie Detectors.’ Although the machines are highly fallible, suspects are more likely to tell the truth when wired up to one. Bell goes on to explore why, and whether this means we should trust the results.

“It turns out,” Bell writes, “that polygraphs have a sort of placebo effect, where people are more truthful because they believe that they work. In fact, studies show that people are more truthful when wired up to a completely bogus ‘lie detector’ look-alike.”

The name “lie detector” is misleading in many ways. First, the polygraph doesn’t actually detect lies but, instead, measures arousal. It is based on the idea that we will be a little more stressed, with fleeting changes in blood pressure, sweat gland activation and respiration, when answering questions with lies compared to giving truthful responses. The majority of tests involve comparing responses to control questions that the interviewee will respond truthfully to (“Are you sitting in a chair?”) with responses to investigation-relevant questions (“Did you handle the money?”).

The “lie detection” part comes from an interpretation of the differences in arousal between these types of answers. But physiological differences may arise for many reasons, not just from intentional deception – I may become more stressed if I worry that I won’t be believed, or if the question concerns something that is naturally arousing – perhaps even just a question that contains highly emotional words.

Because there is no pattern of arousal that is unique to deception, the decision to classify a set of responses as untruthful is inevitably a leap from the shaky ground of ambiguous data into the fog of inference. As a result, techniques to “beat” a polygraph are simple and effective. The simplest strategy seems to be to increase arousal during the control questions, rather than trying to reduce arousal during deception, to eliminate any difference.

Bell concludes:

So if it leads to more information, shouldn’t the police be using it? For those who want to base offender monitoring on a technique that relies on ignorance for its validity, it is unfortunate that none of these details is secret as they’ve been discussed openly in scientific and lay forums for years. Any form of risk management that relies on an offender not knowing about Google is inherently flawed, but perhaps more importantly, we have a responsibility to ensure that the police are not basing public safety on methods that are so easily fooled.


Still curious? Reviews of the scientific evidence by the National Research Council in the US and the British Psychological Society in the UK have indicated that the polygraph has an accuracy of about 85% when evaluating genuinely guilty people. “Unfortunately,” says Bell, “the accuracy is probably nearer to 50% (with results here varying greatly across studies) when attempting to do the same with genuinely innocent people.”

The Inside View and Making Better Decisions

When we don’t think about the process we use to make decisions, they tend to get worse over time as we fail to learn from experience. Often, we make decisions based on the information that is easiest to access. Let’s learn how to take the outside view and make better decisions.

In his book Think Twice: Harnessing the Power of Counterintuition, Michael Mauboussin discusses how we can “fall victim to simplified mental routines that prevent us from coping with the complex realities inherent in important judgment calls.” One of those routines is the inside view, which we’re going to talk about in this article. But first, let’s get a bit of context.

No one wakes up thinking, “I am going to make bad decisions today.” Yet we all make them. What is particularly surprising is some of the biggest mistakes are made by people who are, by objective standards, very intelligent. Smart people make big, dumb, and consequential mistakes.

[…]

Mental flexibility, introspection, and the ability to properly calibrate evidence are at the core of rational thinking and are largely absent on IQ tests. Smart people make poor decisions because they have the same factory settings on their mental software as the rest of us, and that software isn’t designed to cope with many of today’s problems.

We don’t spend enough time thinking about our decision process or learning from it. Generally, we’re pretty indifferent to how we make decisions.

… typical decision makers allocate only 25 percent of their time to thinking about the problem properly and learning from experience. Most spend their time gathering information, which feels like progress and appears diligent to superiors. But information without context is falsely empowering.

That reminds me of what Daniel Kahneman wrote in Thinking, Fast and Slow:

A remarkable aspect of your mental life is that you are rarely stumped … The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it.

Context comes from broad understanding — looking at the problem from the outside in and not the inside out. When we make a decision, we’re not really gathering and contextualizing information so much as trying to confirm our existing intuition, which is the very thing a good decision process should help root out. Think about it this way: every time you make a decision, you’re saying you understand something. Most of us stop there. But understanding is not enough; you need to test that your understanding is correct, which comes through feedback and reflection. Then you need to update your understanding. This is the learning loop.

So why are we so quick to assume we understand?

Ego-Induced Blindness

We tend to favor the inside view over the outside view.

An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs. These inputs may include anecdotal evidence and fallacious perceptions. This is the approach that most people use in building models of the future and is indeed common for all forms of planning.

[…]

The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened. The outside view is an unnatural way to think, precisely because it forces people to set aside all the cherished information they have gathered.

When the inside view is more positive than the outside view, you’re saying (knowingly or, more likely, unknowingly) that this time is different. Our brains are all too happy to help us construct this argument.

Mauboussin argues that we embrace the inside view for a few primary reasons. First, we’re optimistic by nature. Second, there is the “illusion of optimism” (we see our future as brighter than that of others). Finally, there is the illusion of control (we think that chance events are subject to our control).

One interesting point is that while we’re bad at looking at the outside view when it comes to ourselves, we’re better at it when it comes to other people.

In fact, the planning fallacy embodies a broader principle. When people are forced to look at similar situations and see the frequency of success, they tend to predict more accurately. If you want to know how something is going to turn out for you, look at how it turned out for others in the same situation. Daniel Gilbert, a psychologist at Harvard University, ponders why people don’t rely more on the outside view, “Given the impressive power of this simple technique, we should expect people to go out of their way to use it. But they don’t.” The reason is most people think of themselves as different, and better, than those around them.

So it’s mostly ego. I’m better than the people who tackled this problem before me. We see the differences between situations and use them to rationalize why things are different this time.

Consider this:

We incorrectly think that differences are more valuable than similarities.

After all, anyone can see what’s the same but it takes true insight to see what’s different, right? We’re all so busy trying to find differences that we forget to pay attention to what is the same.

Incorporating the Outside View

In Think Twice, Mauboussin distills the work of Kahneman and Tversky into four steps and adds some commentary.

1. Select a Reference Class

Find a group of situations, or a reference class, that is broad enough to be statistically significant but narrow enough to be useful in analyzing the decision that you face. The task is generally as much art as science, and is certainly trickier for problems that few people have dealt with before. But for decisions that are common—even if they are not common for you— identifying a reference class is straightforward. Mind the details. Take the example of mergers and acquisitions. We know that the shareholders of acquiring companies lose money in most mergers and acquisitions. But a closer look at the data reveals that the market responds more favorably to cash deals and those done at small premiums than to deals financed with stock at large premiums. So companies can improve their chances of making money from an acquisition by knowing what deals tend to succeed.

2. Assess the distribution of outcomes.

Once you have a reference class, take a close look at the rate of success and failure. … Study the distribution and note the average outcome, the most common outcome, and extreme successes or failures.

[…]

Two other issues are worth mentioning. The statistical rate of success and failure must be reasonably stable over time for a reference class to be valid. If the properties of the system change, drawing inference from past data can be misleading. This is an important issue in personal finance, where advisers make asset allocation recommendations for their clients based on historical statistics. Because the statistical properties of markets shift over time, an investor can end up with the wrong mix of assets.

Also keep an eye out for systems where small perturbations can lead to large-scale change. Since cause and effect are difficult to pin down in these systems, drawing on past experiences is more difficult. Businesses driven by hit products, like movies or books, are good examples. Producers and publishers have a notoriously difficult time anticipating results, because success and failure is based largely on social influence, an inherently unpredictable phenomenon.

3. Make a prediction.

With the data from your reference class in hand, including an awareness of the distribution of outcomes, you are in a position to make a forecast. The idea is to estimate your chances of success and failure. For all the reasons that I’ve discussed, the chances are good that your prediction will be too optimistic.

Sometimes when you find the right reference class, you see the success rate is not very high. So to improve your chance of success, you have to do something different than everyone else.

4. Assess the reliability of your prediction and fine-tune.

How good we are at making decisions depends a great deal on what we are trying to predict. Weather forecasters, for instance, do a pretty good job of predicting what the temperature will be tomorrow. Book publishers, on the other hand, are poor at picking winners, with the exception of those books from a handful of best-selling authors. The worse the record of successful prediction is, the more you should adjust your prediction toward the mean (or other relevant statistical measure). When cause and effect is clear, you can have more confidence in your forecast.
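Kahneman and Tversky’s procedure can be boiled down to a small calculation: shrink your inside-view estimate toward the reference-class average, and shrink more aggressively when predictions in that domain have a poor track record. Here is a minimal sketch of that idea in Python; it is my own illustration rather than anything from Think Twice, and the function name, weighting scheme, and numbers are hypothetical.

```python
# A minimal sketch (not from Mauboussin's book) of the numeric core of
# reference-class forecasting: blend an inside-view estimate with the
# reference-class average, keeping less weight on the inside view when
# prediction in the domain is unreliable.

from statistics import mean

def outside_view_forecast(inside_estimate, reference_outcomes, reliability):
    """Blend an inside-view estimate with a reference-class base rate.

    inside_estimate    -- your own projection for this specific case
    reference_outcomes -- observed outcomes for comparable past situations
    reliability        -- 0.0 (predictions here are no better than chance)
                          to 1.0 (cause and effect are clear); the weight
                          kept on the inside view
    """
    base_rate = mean(reference_outcomes)
    return reliability * inside_estimate + (1 - reliability) * base_rate

# Hypothetical example: you expect your project to take 6 months, but
# comparable projects took 9-14 months and planning estimates in this
# area have historically been only weakly predictive.
comparable = [9, 11, 12, 14, 10]
print(outside_view_forecast(6, comparable, reliability=0.3))  # -> about 9.6 months
```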

***

The main lesson we can take from this is that we tend to focus on what’s different, whereas the best decisions often focus on just the opposite: what’s the same. Even when a situation seems a little different, it’s almost always the same.

As Charlie Munger has said: “if you notice, the plots are very similar. The same plot comes back time after time.”

Particulars may vary but, unless those particulars are the variables that govern the outcome of the situation, the pattern remains. If you’re going to focus on what’s different rather than what’s the same, you’d best be sure the variables you’re clinging to actually matter.

Article Summary

  • You can reduce the number of mistakes you make by thinking about problems more clearly.
  • Most decision-makers don’t spend enough time on the process of making decisions or learning from their mistakes.
  • Feedback and reflection are necessary components to learn from experience.
  • When you think this time is different, you’re saying you will succeed where others have failed.
  • To better incorporate a broader view, you can select a reference class, assess the distribution of outcomes, make a prediction, and calibrate your accuracy.
  • We tend to focus on what’s different, whereas many of the best decisions focus on what’s the same.

 

Why Privacy Matters Even if You Have ‘Nothing to Hide’

Daniel Solove, author of Nothing to Hide: The False Tradeoff between Privacy and Security, argues that privacy matters even if you have nothing to hide.

The nothing-to-hide argument pervades discussions about privacy. The data-security expert Bruce Schneier calls it the “most common retort against privacy advocates.” The legal scholar Geoffrey Stone refers to it as an “all-too-common refrain.” In its most compelling form, it is an argument that the privacy interest is generally minimal, thus making the contest with security concerns a foreordained victory for security.

“If you’ve got nothing to hide, you’ve got nothing to fear.” While flawed, that argument is not new. It appears in Henry James’s 1888 novel, The Reverberator:

If these people had done bad things they ought to be ashamed of themselves and he couldn’t pity them, and if they hadn’t done them there was no need of making such a rumpus about other people knowing.

In the end, the nothing-to-hide argument has nothing to say.

When the nothing-to-hide argument is unpacked, and its underlying assumptions examined and challenged, we can see how it shifts the debate to its terms, then draws power from its unfair advantage. The nothing-to-hide argument speaks to some problems but not to others. It represents a singular and narrow way of conceiving of privacy, and it wins by excluding consideration of the other problems often raised with government security measures. When engaged directly, the nothing-to-hide argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government data collection and use beyond surveillance and disclosure, the nothing-to-hide argument, in the end, has nothing to say.

The Copernican Principle: How To Predict Everything

An old (1999) New Yorker article introduces us to J. Richard Gott III, a Princeton astrophysicist, and some of his ideas on prediction. The core idea is that — despite what we’d like — we are not that special. So when we encounter something, we are unlikely to be doing so at a special time in its life. This is the Copernican Principle.

“On May 27, 1993, I looked up all the plays that were listed in The New Yorker—Broadway and Off Broadway plays and musicals—and called up each of the theatres and asked when each play had opened,” Gott recalls. “I predicted how long each would run, based solely on how long it had been running already. Forty-four shows were playing at the time. So far, thirty-six of them have closed, all in agreement with my predictions of how long they would last. And the others, which are still running, are also within the range I’d predicted.”

It must be said that Gott’s predictions are, well, broad. He predicted, for instance, that “Marisol,” which had been open for a week when he called the theatres, would close in less than thirty-nine weeks; it lasted 10 more days. To “Cats,” which had then been running for three thousand eight hundred and eighty-five days, Gott assigned a longevity of not less than a hundred days and not more than four hundred and fourteen years.

The significance of Gott’s approach lies in its ability to address questions previously inaccessible to scientific inquiry, such as, say, how long the human species will endure.

“As time goes on, you’ll understand. What lasts, lasts; what doesn’t, doesn’t. Time solves most things. And what time can’t solve, you have to solve yourself.”

― Haruki Murakami

“My approach is based on the Copernican principle, which has been one of the most famous and successful scientific hypotheses in the history of science,” Gott said. “It’s named after Nicolaus Copernicus, who proved that the earth is not the center of the universe; and it’s simply the idea that your location is not special. The more we’ve learned about the universe, the more non-special our location has looked. The earth is orbiting an ordinary star in an ordinary galaxy. The reason the Copernican principle works is that, of all the places for intelligent observers to be, there are, by definition, only a few special places and many non-special places. So you’re simply more likely to be in one of the many non-special places.”

The predictions that I make are based on applying this principle to time. I first thought of it in 1969. I’d just graduated from Harvard and was traveling around Europe, and I visited the Berlin Wall. People at the time wondered how long the Wall might last. Was it a temporary aberration, or a permanent fixture of modern Europe? Standing at the Wall in 1969, I made the following argument, using the Copernican principle. I said, Well, there’s nothing special about the timing of my visit. I’m just travelling—you know, Europe on five dollars a day—and I’m observing the Wall because it happens to be here. My visit is random in time. So if I divide the Wall’s total history, from the beginning to the end, into four quarters, and I’m located randomly somewhere in there, there’s a fifty-percent chance that I’m in the middle two quarters—that means, not in the first quarter and not in the fourth quarter.

Let’s suppose that I’m at the beginning of that middle fifty percent. In that case, one-quarter of the Wall’s ultimate history has passed, and there are three-quarters left in the future. In that case, the future’s three times as long as the past. On the other hand, if I’m at the other end, then three-quarters have happened already, and there’s one-quarter left in the future. In that case, the future is one-third as long as the past.

The Wall was eight years old at the time. “So I said to a friend, ‘There’s a fifty-percent chance that the Wall’s future duration will be between two and two-thirds years [one-third of eight] and twenty-four years.’ Twenty years later, in 1989, the Wall came down, within those two limits that I had predicted. I thought, maybe I should write this up.”
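The arithmetic behind that estimate is simple: with 50 percent confidence, the remaining lifetime falls between one-third and three times the age observed so far. Below is a minimal sketch of the rule as stated above; it is my own illustration, not Gott’s code.

```python
# A toy sketch of Gott's 50% argument as described above: if your
# observation point is equally likely to fall anywhere in an object's
# total lifetime, there is a 50% chance it falls in the middle two
# quarters, so the remaining lifetime is between 1/3 and 3 times the
# age observed so far.

def copernican_interval(age_so_far):
    """Return the (low, high) 50%-confidence bounds on remaining lifetime."""
    return age_so_far / 3, age_so_far * 3

low, high = copernican_interval(8)  # the Berlin Wall, 8 years old in 1969
print(f"between {low:.2f} and {high:.0f} more years")  # between 2.67 and 24 more years
```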

Recently, it’s come to be understood that systems may behave chaotically and therefore be unpredictable. You know, a butterfly in the Amazon can affect the weather thousands of miles away, that sort of thing. This has led some people to say that predicting the future of complex systems is impossible. Which is true if you are concerned with the precise specifics. To predict the name of the President of the United States in the year 2085, for instance, is impossible. But if you ask the right question maybe you can get an interesting answer.

As for the question of how long the human species will last, Gott offers some wise words.

When the author of the New Yorker article, Timothy Ferris, asked his friends how long humans would last, “most people predicted either that human beings will last less than two hundred years or that we’re good for more than ten million years.” To which Gott responded, “That’s because people like to think they’re living in special times. We like to think of ourselves as near the beginning of things, or in an apocalyptic situation near the end. It’s more dramatic that way. A lot of people might say, ‘Oh, but we are in a special epoch. We’re in the epoch when men first went to the moon, when we discovered genetic engineering, nuclear energy, and so forth.’ My answer to this is that the Copernican principle predicts that you will be living in a high-population century—most people do, just as most people come from cities with higher than average populations, in larger than average nations. It’s people who make discoveries, so if you live when there are more people around, you should expect to live in an age when a lot of interesting discoveries are being made.”

***

Still curious? Gott is the author of Time Travel in Einstein’s Universe and Sizing Up the Universe: The Cosmos in Perspective.

Defending a New Domain: The Pentagon’s Cyberstrategy

As someone interested in how the weak win wars, I found this article (pdf) by William Lynn in a recent issue of Foreign Affairs utterly fascinating.

…cyberwarfare is asymmetric. The low cost of computing devices means that U.S. adversaries do not have to build expensive weapons, such as stealth fighters or aircraft carriers, to pose a significant threat to U.S. military capabilities. A dozen determined computer programmers can, if they find a vulnerability to exploit, threaten the United States’ global logistics network, steal its operational plans, blind its intelligence capabilities, or hinder its ability to deliver weapons on target. Knowing this, many militaries are developing offensive capabilities in cyberspace, and more than 100 foreign intelligence organizations are trying to break into U.S. networks. Some governments already have the capacity to disrupt elements of the U.S. information infrastructure.

In cyberspace, the offense has the upper hand. The Internet was designed to be collaborative and rapidly expandable and to have low barriers to technological innovation; security and identity management were lower priorities. For these structural reasons, the U.S. government’s ability to defend its networks always lags behind its adversaries’ ability to exploit U.S. networks’ weaknesses. Adept programmers will find vulnerabilities and overcome security measures put in place to prevent intrusions. In an offense-dominant environment, a fortress mentality will not work. The United States cannot retreat behind a Maginot Line of firewalls or it will risk being overrun. Cyberwarfare is like maneuver warfare, in that speed and agility matter most. To stay ahead of its pursuers, the United States must constantly adjust and improve its defenses.

It must also recognize that traditional Cold War deterrence models of assured retaliation do not apply to cyberspace, where it is difficult and time consuming to identify an attack’s perpetrator. Whereas a missile comes with a return address, a computer virus generally does not. The forensic work necessary to identify an attacker may take months, if identification is possible at all. And even when the attacker is identified, if it is a nonstate actor, such as a terrorist group, it may have no assets against which the United States can retaliate. Furthermore, what constitutes an attack is not always clear. In fact, many of today’s intrusions are closer to espionage than to acts of war. The deterrence equation is further muddled by the fact that cyberattacks often originate from co-opted servers in neutral countries and that responses to them could have unintended consequences.

Nate Silver: The Difference Between Risk and Uncertainty

Nate Silver elaborates on the difference between risk and uncertainty in The Signal and the Noise:

Risk, as first articulated by the economist Frank H. Knight in 1921, is something that you can put a price on. Say that you’ll win a poker hand unless your opponent draws to an inside straight: the chances of that happening are exactly 1 chance in 11. This is risk. It is not pleasant when you take a “bad beat” in poker, but at least you know the odds of it and can account for it ahead of time. In the long run, you’ll make a profit from your opponents making desperate draws with insufficient odds.

Uncertainty, on the other hand, is risk that is hard to measure. You might have some vague awareness of the demons lurking out there. You might even be acutely concerned about them. But you have no real idea how many of them there are or when they might strike. Your back-of-the-envelope estimate might be off by a factor of 100 or by a factor of 1,000; there is no good way to know. This is uncertainty. Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.
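As an aside on the “1 chance in 11” figure in the passage above: one way to arrive at exactly that number is to assume an inside-straight draw has four helpful cards (“outs”) among 44 unseen cards, i.e. a full deck minus both players’ hole cards and four community cards. Silver doesn’t spell out the card counts, so treat the sketch below as an assumption-laden check rather than his calculation.

```python
# A small check of the "1 chance in 11" figure quoted above, under one
# common assumption: an inside straight draw has 4 helpful cards ("outs"),
# and 44 cards remain unseen. The card counts are my assumption, not
# spelled out by Silver.

outs = 4
unseen = 52 - 2 - 2 - 4   # deck minus your hand, your opponent's hand, and the board
probability = outs / unseen
print(probability, 1 / probability)  # 0.0909... -> exactly 1 in 11
```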

Don’t Let Your (Technology) Tools Use You

“In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

— Herbert Simon

***

A shovel is just a shovel. You shovel things with it. You can break up weeds and dirt. (You can also whack someone with it.) I’m not sure I’ve seen a shovel used for much else.

Modern technological tools aren’t really like that.

What is an iPhone, functionally? Sure, it’s got the phone thing down, but it’s also a GPS, a note-taker, an emailer, a text messager, a newspaper, a video-game device, a taxi-calling service, a flashlight, a web browser, a library, a book…you get the point. It does a lot.

This all seems pretty wonderful. To perform those functions 20 years ago, you needed a map and a sense of direction, a notepad, a personal computer, a cell phone, an actual newspaper, a Playstation, a phone and the willingness to talk to a person, an actual flashlight, an actual library, an actual book…you get the point. As Marc Andreessen puts it, software is eating the world. One simple (looking) device and a host of software can perform the functions served by a bunch of big clunky tools of the past.

So far, we’ve been convinced that use of the New Tools is mostly “upside,” that our embrace of them should be wholehearted. Much of this is for good reason. Do you remember how awful using a map was? Yuck.

The problem is that our New Tools are winning the battle of attention. We’ve gotten to the point where the tools use us as much as we use them. This new reality means we need to re-examine our relationship with our New Tools.


Down the Rabbit Hole

Here’s a typical situation.

You’re on your computer finishing the client presentation you have to give in two days. Your phone lights up and chimes — you’ve got a text message. “Hey, have you seen that new Dracula movie?” asks your friend. It only takes a few messages before the two of you begin to disagree on whether Transylvania is actually a real place. Off to Google!

After a few quick clicks, you get to Wikipedia, which tells you that yes, Transylvania is a region of Romania which the author Bram Stoker used as Count Dracula’s birthplace. Reading the Wikipedia entry costs you about 20 minutes. As you read, you find out that Bram Stoker was actually Irish. Irish! An Irish guy wrote Dracula? How did I not know this? Curiosity stoked, you look up Irish novelists, the history of Gothic literature, the original vampire stories…down and down the rabbit hole you go.

Eventually your thirst for trivia is exhausted, and you close the Wikipedia tab to text your friend how wrong they are in regards to Transylvania. You click the Home button to leave your text conversation, which lets you see the Twitter icon. I wonder how many people retweeted my awesome joke about ventriloquism? You pull it up and start “The Scroll.” Hah! Greg is hilarious. Are you serious, Bill Gates? Damn — I wish I read as much as Shane Parrish. You go and go. Your buddy tweets a link to an interesting-looking article about millennials — “10 Ways Millennials are Ruining the Workplace”. God, they are so self-absorbed. Click.

You decide to check Facebook and see if that girl from the cocktail party on Friday commented on your status. She didn’t, but Wow, Susanne went to Hawaii? You look at 35 pictures Susanne posted in her first three hours in Hawaii. Wait, who’s that guy she’s with? You click his name and go to his Facebook page. On down the rabbit hole you fall…

Now it’s been two hours since you left your presentation to respond to the text message, and you find yourself physically tired from the rapid scanning and clicking, scanning and clicking, scanning and clicking of the past two hours. Sad, you go get a coffee, go for a short walk, and decide: Now, I will focus. No more distraction.

Ten minutes in, your phone buzzes. That girl from the cocktail party commented on your status…

Attention for Sale

We’ve all been there. When we come up for air, it can feel like the aftermath of being swept along by a mob. What did I just do?

The tools we’re now addicted to have been engineered for a simple purpose: To keep us addicted to them. The service they provide is secondary to the addiction. Yes, Facebook is a networking tool. Yes, Twitter is a communication tool. Yes, Instagram is an excellent food-photography tool. But unless they get us hooked and keep us hooked, their business models are broken.

Don’t believe us?

Take stock of the metrics by which people value or assess these companies. Clicks. Views. Engagement. Return visits. Length of stay. The primary source of value for these products is how much you use them and what they can sell to you while you’re there. Increasing their value is a simple (but not easy) proposition: Either get usage up or figure out more effective ways to sell to you while you’re there.

As Herbert Simon might have predicted, our attention is for sale, and we’re ceding it a little at a time as the tools get better and better at fulfilling their function. There’s a version of natural selection going on, where the only consumer technology products that survive are the enormously addictive ones. The trait which produces maximum fitness is addictiveness itself. If you’re not using a tool constantly, it has no value to advertisers or data sellers, and thus they cannot raise capital to survive. And even if it’s an app or tool that you buy, one that you have to pay money for upfront, they must hook you on Version 1 if you’re going to be expected to buy Versions 2, 3, and 4.

This ecosystem ensures that each generation of consumer tech products – hardware or software – gets better and better at keeping you hooked. These services have learned, through a process of evolution, to drown users in positive feedback and create intense habitual usage. They must – because any other outcome is death. Facebook doesn’t want you to go on once a month to catch up on your correspondence. You must be engaged. The service does not care whether it’s unnecessarily eating into your life.

Snap Back to Reality

It’s up to us, then, to take our lives back. We must recognize that the New Tools have a tremendous downside in the focused attention they cost us, and that we’re giving that attention up willingly in a sort of Faustian bargain for entertainment, connectedness, and novelty.

Psychologist Mihaly Csikszentmihalyi pioneered the concept of Flow, where we enter an enjoyable state of rapt attention to our work and produce a high level of creative output. It’s a wonderful feeling, but the New Tools have learned to provide the same sensation without the actual results. We don’t end up with a book, or a presentation, or a speech, or a quilt, or a hand-crafted table. We end up two hours later in the day.

***

The first step towards a solution must be to understand the reality of this new ecosystem.

It follows Garrett Hardin’s “First Law of Ecology”: You can never merely do one thing. The New Tools are not like the Old Tools, where you pick up the shovel, do your shoveling, and then put the shovel back in the garage. The iPhone is not designed that way. It’s designed to keep you going, as are most of the other New Tools. You probably won’t send one text. You probably won’t watch one video. You probably won’t read one article. You’re not supposed to!

The rational response to this new reality depends a lot on who you are and what you need the tools for. Some people can get rid of 50% or more of their New Tools very easily. You don’t have to toss out your iPhone for a StarTAC, but because software is doing the real work, you can purposefully reduce the capability of the hardware by reducing your exposure to certain software.

As you shed certain tools, expect a homeostatic response from your network. Don’t be mistaken: If you’re a Snapchatter or an Instagrammer or simply an avid texter, getting rid of those services will give rise to consternation. They are, after all, networking tools. Your network will notice. You’ll need a bit of courage to face your friends and tell them, with a straight face, that you won’t be Instagramming anymore because you’re afraid of falling down the rabbit hole. But if you’ve got the courage, you’ll probably find that after a week or two of adjustment your life will go on just fine.

The second and more mild type of response would be to appreciate the chain-smoking nature of these products and to use them more judiciously. Understand that every time you look at your iPhone or connect to the Internet, the rabbit hole is there waiting for you to tumble down. If you can grasp that, you’ll realize that you need to be suspicious of the “quick check.” Either learn to batch Internet and phone time into concentrated blocks or slowly re-learn how to ignore the desire to follow up on every little impulse that comes to mind. (Or preferably, do both.)

A big part of this is turning off any sort of “push” notification, which must be the most effective attention-diverter ever invented by humanity. A push notification is anything that draws your attention to the tool without your conscious input. It’s when your phone buzzes for a text message, or an image comes on the screen when you get an email, or your phone tells you that you’ve got a Facebook comment. Anything that desperately induces you to engage. You need to turn them off. (Yes, including text message notifications – your friends will get used to waiting).

E-mail can be the worst offender; it’s the earliest and still one of the most effective digital rabbit holes. To push back, close your email client when you’re not using it. That way, you’ll have to open it to send or read an email. Then go ahead and change the settings on your phone’s email client so you have to “fetch” emails yourself, rather than having them pushed at you. Turn off anything that tells you an email has arrived.

Once you stop being notified by your tools, you can start to engage with them on your own terms and focus on your real work for a change; focus on the stuff actually producing some value in your life and in the world. When the big stuff is done, you can give yourself a half-hour or an hour to check your Facebook page, check your Instagram page, follow up on Wikipedia, check your emails, and respond to your text messages. This isn’t as good a solution as deleting many of the apps altogether, but it does allow you to engage with these tools on your own terms.

However you choose to address the world of New Tools, you’re way ahead if you simply recognize their power over your attention. Getting lost in hyperlinks and Facebook feeds doesn’t mean you’re weak, it just means the tools you’re using are designed, at their core, to help you get lost. Instead of allowing yourself to go to work for them, resolve to make them work for you.

Nassim Taleb: The Winner-Take-All Effect In Longevity

Nassim Taleb elaborates on the Copernican Principle, a concept first introduced on Farnam Street in How To Predict Everything.

For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day implies a longer life expectancy.

So the longer a technology lives, the longer it is expected to live. Let me illustrate the point. Say I have for sole information about a gentleman that he is 40 years old and I want to predict how long he will live. I can look at actuarial tables and find his age-adjusted life expectancy as used by insurance companies. The table will predict that he has an extra 44 years to go. Next year, when he turns 41 (or, equivalently, if I apply the reasoning today to another person currently 41), he will have a little more than 43 years to go. So every year that lapses reduces his life expectancy by about a year (actually, a little less than a year, so if his life expectancy at birth is 80, his life expectancy at 80 will not be zero, but another decade or so).

The opposite applies to nonperishable items. I am simplifying numbers here for clarity. If a book has been in print for forty years, I can expect it to be in print for another forty years. But, and that is the main difference, if it survives another decade, then it will be expected to be in print another fifty years. This, simply, as a rule, tells you why things that have been around for a long time are not “aging” like persons, but “aging” in reverse. Every year that passes without extinction doubles the additional life expectancy. This is an indicator of some robustness. The robustness of an item is proportional to its life!

This is the “winner-take-all” effect in longevity.
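To make the contrast concrete, here is a toy sketch (my own illustration, not Taleb’s) of the two rules as stated above: a perishable item loses roughly a year of remaining expectancy for each year it lives, while a nonperishable item’s expected remaining life tracks the age it has already survived.

```python
# A toy contrast of the two rules described above (my own illustration,
# not Taleb's): for a perishable item, each year lived subtracts roughly
# a year from its remaining expectancy; for a nonperishable item, expected
# remaining life stays proportional to the age already survived.

def remaining_perishable(age, life_expectancy_at_birth=80):
    """Rough actuarial-style rule: remaining years shrink as age rises."""
    return max(life_expectancy_at_birth - age, 0)

def remaining_nonperishable(age):
    """Lindy-style rule: a technology 40 years old is expected to last ~40 more."""
    return age

for age in (10, 40, 70):
    print(age, remaining_perishable(age), remaining_nonperishable(age))
# 10 -> 70 vs 10; 40 -> 40 vs 40; 70 -> 10 vs 70
```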

The main argument against this idea is the counterexample — newspapers and traditional telephone lines come to mind. These technologies, widely considered inefficient and dying, have been around for a long time. Yet the Copernican Principle would suggest they will continue to live on for a long time.

These arguments miss the point of probability. The argument is not about a specific example but about life expectancy, which is, Taleb writes, “simply a probabilistically derived average.”

Perhaps an example, from Taleb, will help illustrate. If I were to ask you to guess the life expectancy of the average 40-year-old man, you would probably guess around 80 (at least, that’s what the actuarial tables likely reveal). However, if I now add that the man is suffering from cancer, we would revisit our decision and most likely revise our estimate downward. “It would,” Taleb writes, “be a mistake to think that he has forty-four more years to live, like others in his age group who are cancer-free.”

“In general, the older the technology, not only the longer it is expected to last, but the more certainty I can attach to such statement.”

***

If you liked this, you’ll love these three other Farnam Street articles:

The Copernican Principle: How To Predict Everything — Based on one of the most famous and successful prediction methods in the history of science.

Ten Commandments for Aspiring Superforecasters — The ten key themes that have been “experimentally demonstrated to boost accuracy” in the real world.

Philip Tetlock on The Art and Science of Prediction — How we can get better at the art and science of prediction, including diving into what makes some people better at making predictions and how we can learn to improve our ability to guess the future.
