
The Ingredients For Innovation

Inventing new things is hard. Getting people to accept and use new inventions is often even harder. For most people, at most times, technological stagnation has been the norm. What does it take to escape from that and encourage creativity?

***

“Technological progress requires above all tolerance toward the unfamiliar and the eccentric.”

— Joel Mokyr, The Lever of Riches

Writing in The Lever of Riches: Technological Creativity and Economic Progress, economic historian Joel Mokyr asks why, when we look at the past, some societies have been considerably more creative than others at particular times. Some have experienced sudden bursts of progress, while others have stagnated for long periods of time. By examining the history of technology and identifying the commonalities between the most creative societies and time periods, Mokyr offers useful lessons we can apply as both individuals and organizations.

What does it take for a society to be technologically creative?

When trying to explain something as broad and complex as technological creativity, it’s important not to fall prey to the lure of a single explanation. There are many possible reasons for anything that happens, and it’s unwise to believe explanations that are too tidy. Mokyr disregards some of the common simplistic explanations for technological creativity, such as the ideas that war prompts creativity or that people with shorter life spans are less likely to spend time on invention.

Mokyr explores some of the possible factors that contribute to a society’s technological creativity. In particular, he seeks to explain why Europe experienced such a burst of technological creativity from around 1500 to the Industrial Revolution, when prior to that it had lagged far behind the rest of the world. Mokyr explains that “invention occurs at the level of the individual, and we should address the factors that determine individual creativity. Individuals, however, do not live in a vacuum. What makes them implement, improve and adapt new technologies, or just devise small improvements in the way they carry out their daily work depends on the institutions and the attitudes around them.” While environment isn’t everything, certain conditions are necessary for technological creativity.

He identifies the following three key factors in an environment that affect the occurrence of invention and innovation.

The social infrastructure

First of all, the society needs a supply of “ingenious and resourceful innovators who are willing and able to challenge their physical environment for their own improvement.” Fostering these attributes requires factors like good nutrition, religious beliefs that are not overly conservative, and access to education. It is in part about the absence of negative factors—necessitous people have less capacity for creativity. Mokyr writes: “The supply of talent is surely not completely exogenous; it responds to incentives and attitudes. The question that must be confronted is why in some societies talent is unleashed upon technical problems that eventually change the entire productive economy, whereas in others this kind of talent is either repressed or directed elsewhere.”

One partial explanation for Europe’s creativity from 1500 to the Industrial Revolution is that it was often feasible for people to relocate to a different country if the conditions in their current one were suboptimal. A creative individual finding themselves under a conservative government seeking to maintain the technological status quo was able to move elsewhere.

The ability to move around was also part of the success of the Abbasid Caliphate, an empire that stretched from India to the Iberian Peninsula from about 750 to 1250. Economists Maristella Botticini and Zvi Eckstein write in The Chosen Few: How Education Shaped Jewish History, 70–1492 that “it was relatively easy to move or migrate” within the Abbasid empire, especially with its “common language (Arabic) and a uniform set of institutions and laws over an immense area, greatly [favoring] trade and commerce.”

It also matters whether creative people are channeled into technological fields or into other fields, like the military. In Britain during and prior to the Industrial Revolution, Mokyr considers invention to have been the main possible path for creative individuals, as other areas like politics leaned towards conformism.

The social incentives

Second, there need to be incentives in place to encourage innovation. This is of extra importance for macroinventions – completely new inventions, not improvements on existing technology – which can require a great leap of faith. The person who comes up with a faster horse knows it has a market; the one who comes up with a car does not. Such incentives are most often financial, but not always. Awards, positions of power, and recognition also count. Mokyr explains that diverse incentives encourage the patience needed for creativity: “Sustained innovation requires a set of individuals willing to absorb large risks, sometimes to wait many years for the payoff (if any).”

Patent systems have long served as an incentive, allowing inventors to feel confident they will profit from their work. Patents first appeared in northern Italy in the early fifteenth century; Venice implemented a formal system in 1474. According to Mokyr, the monopoly rights mining contractors received over the discovery of hitherto unknown mineral resources provided inspiration for the patent system.

However, Mokyr points out that patents were not always as effective as inventors hoped. Indeed, they may have provided the incentive without any actual protection. Many inventors ended up spending unproductive time and money on patent litigation, which in some cases outweighed their profits, discouraged them from future endeavors, or left them too drained to invent more. Eli Whitney, inventor of the cotton gin, claimed his legal costs outweighed his profits. Mokyr proposes that though patent laws may be imperfect, they are, on balance, good for society as they incentivize invention while not altogether preventing good ideas from circulating and being improved upon by others.

The ability to make money from inventions is also related to geographic factors. In a country with good communication and transport systems, with markets in different areas linked, it is possible for something new to sell further afield. A bigger prospective market means stronger financial incentives. The extensive, accessible, and well-maintained trade routes during the Abbasid empire allowed for innovations to diffuse throughout the region. And during the Industrial Revolution in Britain, railroads helped bring developments to the entire country, ensuring inventors didn’t just need to rely on their local market.

The social attitude

Third, a technologically creative society must be diverse and tolerant. People must be open to new ideas and outré individuals. They must not only be willing to consider fresh ideas from within their own society but also happy to take inspiration from (or to outright steal) those coming from elsewhere. If a society views knowledge coming from other countries as suspect or even dangerous, unable to see its possible value, it is at a disadvantage. If it eagerly absorbs external influences and adapts them for its own purposes, it is at an advantage. Europeans were willing to pick up on ideas from each other and elsewhere in the world. As Mokyr puts it, “Inventions such as the spinning wheel, the windmill, and the weight-driven clock recognized no boundaries.”

In the Abbasid empire, there was an explosion of innovation that drew on the knowledge gained from other regions. Botticini and Eckstein write:

“The Abbasid period was marked by spectacular developments in science, technology, and the liberal arts. . . . The Muslim world adopted papermaking from China, improving Chinese technology with the invention of paper mills many centuries before paper was known in the West. Muslim engineers made innovative industrial uses of hydropower, tidal power, wind power, steam power, and fossil fuels. . . . Muslim engineers invented crankshafts and water turbines, employed gears in mills and water-raising machines, and pioneered the use of dams as a source of waterpower. Such advances made it possible to mechanize many industrial tasks that had previously been performed by manual labor.”

Within societies, certain people and groups seek to maintain the status quo because it is in their interests to do so. Mokyr writes that “Some of these forces protect vested interests that might incur losses if innovations were introduced, others are simply don’t-rock-the-boat kind of forces.” In order for creative technology to triumph, it must be able to overcome those forces. While there is always going to be conflict, the most creative societies are those where it is still possible for the new thing to take over. If those who seek to maintain the status quo have too much power, a society will end up stagnating in terms of technology. Ways of doing things can prevail not because they are the best, but because there is enough interest in keeping them that way.

In some historical cases in Europe, it was easier for new technologies to spread in the countryside, where the lack of guilds compensated for the lower density of people. City guilds had a huge incentive to maintain the status quo. The inventor of the ribbon loom in Danzig in 1579 was allegedly drowned by the city council, while “in the fifteenth century, the scribes guild of Paris succeeded in delaying the introduction of printing in Paris by 20 years.”

Indeed, tolerance could be said to matter more for technological creativity than education. As Mokyr repeatedly highlights, many inventors and innovators throughout history were not educated to a high level—or even at all. Up until relatively recently, most technology preceded the science explaining how it actually worked. People tinkered, looking to solve problems and experiment.

Unlike modern times, Mokyr explains, for most of history technology did not emerge from “specialized research laboratories paid for by research and development budgets and following strategies mapped out by corporate planners well-informed by marketing analysts. Technological change occurred mostly through new ideas and suggestions occurring if not randomly, then in a highly unpredictable fashion.”

When something worked, it worked, even if no one knew why or the popular explanation later proved incorrect. Steam engines are one such example. The notion that all technologies function under the same set of physical laws was not standard until Galileo. People need space to be a bit weird.

Those who were scientists and academics during some of Europe’s most creative periods worked in a different manner than what we expect today, often working on the practical problems they faced themselves. Mokyr gives Galileo as an example, as he “built his own telescopes and supplemented his salary as a professor at the University of Padua by making and repairing instruments.” The distinction between one who thinks and one who makes was not yet clear at the time of the Renaissance. Wherever and whenever making has been a respectable activity for thinkers, creativity flourishes.

Seeing as technological creativity requires a particular set of circumstances, it is not the norm. Throughout history, Mokyr writes, “Technological progress was neither continuous nor persistent. Genuinely creative societies were rare, and their bursts of creativity usually short-lived.”

Not only did people need to be open to new ideas, they also needed to be willing to actually start using new technologies. This often required a big leap of faith. If you’re a farmer just scraping by, trying a new way of ploughing your fields could mean starving to death if it doesn’t work out. Innovations can take a long time to diffuse, with riskier ones taking the longest.

How can we foster the right environment?

So what can we learn from The Lever of Riches that we can apply as individuals and in organizations?

The first lesson is that creativity does not occur in a vacuum. It requires certain conditions to be in place. If we want to come up with new ideas as individuals, we should consider ourselves as part of a system. In particular, we need to consider what might impede us and what can encourage us. We need to eradicate anything that will get in the way of our thinking, such as limiting beliefs or lack of sleep.

We need to be clear on what motivates us to be creative, ensuring what we endeavor to do will be worthwhile enough to drive us through the associated effort. When we find ourselves creatively blocked, it’s often because we’re not in touch with what inspires us to create in the first place.

Within an organization, such factors are equally important. If you want your employees to be creative, it’s important to consider the system they’re part of. Is there anything blocking their thinking? Is a good incentive structure in place (bearing in mind incentives are not solely financial)?

Another lesson is that tolerance for divergence is essential for encouraging creativity. This may seem like part of the first lesson, but it’s crucial enough to consider in isolation.

As individuals, when we seek to come up with new ideas, we need to ask ourselves the following questions: Am I exposing myself to new material and inspirations or staying within a filter bubble? Am I open to unusual ways of thinking? Am I spending too much time around people who discourage deviation from the status quo? Am I being tolerant of myself, allowing myself to make mistakes and have bad ideas in service of eventually having good ones? Am I spending time with unorthodox people who encourage me to think differently?

Within organizations, it’s worth asking the following questions: Are new ideas welcomed or shot down? Is it in the interests of many to protect the status quo? Are ideas respected regardless of their source? Are people encouraged to question norms?

A final lesson is that the forces of inertia are always acting to discourage creativity. Invention is not the natural state of things—it is an exception. Technological stagnation is the norm. In most places, at most times, people have not come up with new technology. It takes a lot for individuals to be willing to wrest something new from nothing or to question whether something in existence can be made better. But when those acts do occur, they can have an immeasurable impact on our world.

Thinking For Oneself

When I was young, I thought other people could give me wisdom. Now that I’m older, I know this isn’t true.

Wisdom is earned, not given. When other people give us the answer, it belongs to them and not us. While we might achieve the outcome we desire, it comes from dependence, not insight. Instead of thinking for ourselves, we’re dependent on the insight of others.

There is nothing wrong with buying insight; it’s one way we leverage ourselves. The problem is when we assume the insight of others is our own.

Earning insight requires going below the surface. Most of us want to shy away from the details and complexity. It takes a while. It’s boring. It’s mental work.

Yet it is only by jumping into the complexity that we can really discover simplicity for ourselves.

While the abundant directives, rules, and simplicities offered by others make us feel like we’re getting smarter, it’s nothing more than the illusion of knowledge.

If wisdom were as simple to acquire as reading, we’d all be wealthy and happy. Others can help you, but they can’t do the work for you. Owning wisdom for oneself requires a discipline the promiscuous consumer of it does not share.

Perhaps an example will help. The other day a plumber came to repair a pipe. He fixed the problem in under five minutes. The mechanical motions are easy to replicate. In fact, while it would take me longer, the procedure was so simple that if you watched him, you’d be able to do it. However, if even one thing were to deviate or change, we’d have a crisis on our hands, whereas the plumber would not. It took years of work to earn the wisdom he brought to solve the problem. Just because we could only see the simplicity he brought to the problem didn’t mean there wasn’t a deep understanding of the complexity behind it. There is no way we could acquire that insight in a few minutes by watching. We’d need to do it over and over for years, experiencing all of the things that could go wrong.

Thinking is something you have to do by yourself.

Appearances vs Experiences: What Really Makes Us Happy

In the search for happiness, we often confuse how something looks with how it’s likely to make us feel. This is especially true when it comes to our homes. If we want to maximize happiness, we need to prioritize experiences over appearances.

***

Most of us try to make decisions intended to bring us greater happiness. The problem is that we misunderstand how our choices really impact our well-being and end up making ones that have the opposite effect. We buy stuff that purports to inspire happiness and end up feeling depressed instead. Knowing some of the typical pitfalls in the search for happiness—especially the ones that seem to go against common sense—can help us improve quality of life.

It’s an old adage that experiences make us happier than physical things. But knowing is not the same as doing. One area where this is all too apparent is choosing where to live. You might think that how a home looks is vital to how happy you are living in it. Wrong! The experience of a living space is far more important than its appearance.

The influence of appearance

In Happy City: Transforming Our Lives Through Urban Design, Charles Montgomery explores some of the ways in which we misunderstand how our built environment and the ways we move through cities influence our happiness.

Towards the end of their first year at Harvard, freshmen find out which dormitory they will be living in for the rest of their time at university. Places are awarded via a lottery system, so individual students have no control over where they end up. Harvard’s dormitories are many and varied in their design, size, amenities, age, location, and overall prestige. Students take allocation seriously, as the building they’re in inevitably has a big influence on their experience at university. Or does it?

Montgomery points to two Harvard dormitories. Lowell House, a stunning red brick building with a rich history, is considered the most prestigious of them all. Students clamor to live in it. Who could ever be gloomy in such a gorgeous building?

Meanwhile, Mather House is a much-loathed concrete tower. It’s no one’s first choice. Most students pray for a room in the former and hope to be spared the latter, because they think their university experience will be as awful as the building looks. (It’s worth noting that although the buildings vary in appearance, neither is lacking any of the amenities a student needs to live. Nor is Mather House in any way decrepit.)

The psychologist Elizabeth Dunn asked a group of freshmen to predict how each of the available dormitories might affect their experience of Harvard. In follow-up interviews, she compared their lived experience with those initial predictions. Montgomery writes:

The results would surprise many Harvard freshmen. Students sent to what they were sure would be miserable houses ended up much happier than they had anticipated. And students who landed in the most desirable houses were less happy than they expected to be. Life in Lowell House was fine. But so was life in the reviled Mather House. Overall, Harvard’s choice dormitories just didn’t make anyone much happier than its spurned dormitories.

Why did students make this mistake and waste so much energy worrying about dormitory allocation? Dunn found that they “put far too much weight on obvious differences between residences, such as location and architectural features, and far too little on things that were not so glaringly different, such as the sense of community and the quality of relationships they would develop in their dormitory.”

Asked to guess if relationships or architecture are more important, most of us would, of course, say relationships. Our behavior, however, doesn’t always reflect that. Dunn further states:

This is the standard mis-weighing of extrinsic and intrinsic values: we may tell each other that experiences are more important than things, but we constantly make choices as though we didn’t believe it.

When we think that the way a building looks will dictate our experience living in it, we are mistaking the map for the territory. Architectural flourishes soon fade into the background. What matters is the day-to-day experience of living there, where relationships matter much more than how things look. Proximity to friends is a stronger predictor of happiness than charming old brick.

The impact of experience

Some things we can get used to. Some we can’t. We make a major mistake when we think it’s worthwhile to put up with negative experiences that are difficult to grow accustomed to in order to have nice things. Once again, this happens when we forget that our day-to-day experience is paramount in our perception of our happiness.

Take the case of suburbs. Montgomery describes how many people in recent decades moved to suburbs outside of American cities. There, they could enjoy luxuries like big gardens, sprawling front lawns, wide streets with plenty of room between houses, spare bedrooms, and so on. City dwellers imagined themselves and their families spreading out in spacious, safe homes. But American cities ended up being shaped by flawed logic, as Montgomery elaborates:

Neoclassical economics, which dominated the second half of the twentieth century, is based on the premise that we are all perfectly well equipped to make choices that maximize utility. . . . But the more psychologists and economists examine the relationship between decision-making and happiness, the more they realize that this is simply not true. We make bad choices all the time. . . . Our flawed choices have helped shape the modern city—and consequently, the shape of our lives.

Living in the suburbs comes at a price: long commutes. Many people spend hours a day behind the wheel, getting to and from work. On top of that, the dispersed nature of suburbs means that everything from the grocery store to the gym requires more extended periods of time driving. It’s easy for an individual to spend almost all of their non-work, non-sleep time in their car.

Commuting is, in just about every sense, terrible for us. The more time people spend driving each day, the less happy they are with their life in general. This unhappiness even extends to the partners of people with long commutes, who also experience a decline in well-being. Commuters see their health suffer due to long periods of inactivity and the stress of being stuck in traffic. It’s hard to find the time and energy for things like exercise or seeing friends if you’re always on the road. Gas and car-related expenses can eat up the savings from living outside of the city. That’s not to mention the environmental toll. Commuting is generally awful for mental health, which Montgomery illustrates:

A person with a one-hour commute has to earn 40 percent more money to be as satisfied with life as someone who walks to the office. On the other hand, for a single person, exchanging a long commute for a short walk to work has the same effect on happiness as finding a new love.

So why do we make this mistake? Drawing on the work of psychologist Daniel Gilbert, Montgomery explains that it’s a matter of us thinking we’ll get used to commuting (an experience) and won’t get used to the nicer living environment (a thing).

The opposite is true. While a bigger garden and spare bedroom soon cease to be novel, every day’s commute is a little bit different, meaning we can never get quite used to it. There is a direct, linear downward relationship between commute time and life satisfaction, but no corresponding upward relationship between house size and life satisfaction. As Montgomery says, “The problem is, we consistently make decisions that suggest we are not so good at distinguishing between ephemeral and lasting pleasures. We keep getting it wrong.”

Happy City teems with insights about the link between the design of where we live and our quality of life. In particular, it explores how cities are often shaped by mistaken ideas about what brings us happiness. We maximize our chances at happiness when we prioritize our experience of life instead of acquiring things to fill it with.

Job Interviews Don’t Work

Better hiring leads to better work environments, less turnover, and more innovation and productivity. When you understand the limitations and pitfalls of the job interview, you improve your chances of hiring the best possible person for your needs.

***

The job interview is a ritual just about every adult goes through at least once, and it seems to be a ubiquitous part of most hiring processes. The funny thing about interviews, however, is that they take up time and resources without actually helping to select the best people to hire. Instead, they promote a homogeneous workforce where everyone thinks the same.

If you have any doubt about how much you can get from an interview, think of what’s involved for the person being interviewed. We’ve all been there. The night before, you dig out your smartest outfit, iron it, and hope your hair lies flat for once. You frantically research the company, reading every last news article based on a formulaic press release, every blog post by the CEO, and every review by a disgruntled former employee.

After a sleepless night, you trek to their office, make awkward small talk, then answer a set of predictable questions. What’s your biggest weakness? Where do you see yourself in five years? Why do you want this job? Why are you leaving your current job? You reel off the answers you prepared the night before, highlighting the best of the best. All the while, you’re reminding yourself to sit up straight, don’t bite your nails, and keep smiling.

It’s not much better on the employer’s side of the table. When you have a role to fill, you select a list of promising candidates and invite them for an interview. Then you pull together a set of standard questions to riff off, doing a little improvising as you hear their responses. At the end of it all, you make some kind of gut judgment about the person who felt right—likely the one you connected with the most in the short time you were together.

Is it any surprise that job interviews don’t work when the whole process is based on subjective feelings? They are in no way the most effective means of deciding who to hire because they maximize the role of bias and minimize the role of evaluating competency.

What is a job interview?

“In most cases, the best strategy for a job interview is to be fairly honest, because the worst thing that can happen is that you won’t get the job and will spend the rest of your life foraging for food in the wilderness and seeking shelter underneath a tree or the awning of a bowling alley that has gone out of business.”

— Lemony Snicket, Horseradish

When we say “job interviews” throughout this post, we’re talking about the type of interview that has become standard in many industries and even in universities: free-form interviews in which candidates sit in a room with one or more people from a prospective employer (often people they might end up working with) and answer unstructured questions. Such interviews tend to focus on how a candidate behaves generally, emphasizing factors like whether they arrive on time or if they researched the company in advance. While questions may ostensibly be about predicting job performance, they tend to better select for traits like charisma rather than actual competence.

Unstructured interviews can make sense for certain roles. The ability to give a good first impression and be charming matters for a salesperson. But not all roles need charm, and just because you don’t want to hang out with someone after an interview doesn’t mean they won’t be an amazing software engineer. In a small startup with a handful of employees, someone being “one of the gang” might matter because close-knit friendships are a strong motivator when work is hard and pay is bad. But that group mentality may be less important in a larger company in need of diversity.

Considering the importance of hiring and how much harm getting it wrong can cause, it makes sense for companies to study and understand the most effective interview methods. Let’s take a look at why job interviews don’t work and what we can do instead.

Why job interviews are ineffective

Discrimination and bias

Information like someone’s age, gender, race, appearance, or social class shouldn’t dictate if they get a job or not—their competence should. But that’s unfortunately not always the case. Interviewers can end up picking the people they like the most, which often means those who are most similar to them. This ultimately means a narrower range of competencies is available to the organization.

Psychologist Ron Friedman explains in The Best Place to Work: The Art and Science of Creating an Extraordinary Workplace some of the unconscious biases that can impact hiring. We tend to rate attractive people as more competent, intelligent, and qualified. We consider tall people to be better leaders, particularly when evaluating men. We view people with deep voices as more trustworthy than those with higher voices.

Implicit bias is pernicious because it’s challenging to spot the ways it influences interviews. Once an interviewer judges someone, they may ask questions that nudge the interviewee towards fitting that perception. For instance, if they perceive someone to be less intelligent, they may ask basic questions that don’t allow the candidate to display their expertise. Having confirmed their bias, the interviewer has no reason to question it or even notice it in the future.

Hiring often comes down to how much an interviewer likes a candidate as a person. This means that we can be manipulated by manufactured charm. If someone’s charisma is faked for an interview, an organization can be left dealing with the fallout for ages.

The map is not the territory

The representation of something is not the thing itself. A job interview is meant to be a quick snapshot to tell a company how a candidate would be at a job. However, it’s not a representative situation in terms of replicating how the person will perform in the actual work environment.

For instance, people can lie during job interviews. Indeed, the situation practically encourages it. While most people feel uncomfortable telling outright lies (and know they would face serious consequences later on for a serious fabrication), bending the truth is common. Ron Friedman writes, “Research suggests that outright lying generates too much psychological discomfort for people to do it very often. More common during interviews are more nuanced forms of deception which include embellishment (in which we take credit for things we haven’t done), tailoring (in which we adapt our answers to fit the job requirements), and constructing (in which we piece together elements from different experiences to provide better answers.)” An interviewer can’t know if someone is deceiving them in any of these ways. So they can’t know if they’re hearing the truth.

One reason why we think job interviews are representative is the fundamental attribution error. This is a logical fallacy that leads us to believe that the way people behave in one area carries over to how they will behave in other situations. We view people’s behaviors as the visible outcome of innate characteristics, and we undervalue the impact of circumstances.

Some employers report using a single detail they consider representative to make hiring decisions, such as whether a candidate sends a thank-you note after the interview or whether their LinkedIn picture is a selfie. Sending a thank-you note shows manners and conscientiousness. Having a selfie on LinkedIn shows unprofessionalism. But is that really true? Can one detail carry across to every area of job performance? It’s worth debating.

Gut feelings aren’t accurate

We all like to think we can trust our intuition. The problem is that intuitive judgments tend to only work in areas where feedback is fast and cause and effect clear. Job interviews don’t fall into that category. Feedback is slow. The link between a hiring decision and a company’s success is unclear.

Overwhelmed by candidates and the pressure of choosing, interviewers may resort to making snap judgments based on limited information. And interviews introduce a lot of noise, which can dilute relevant information while leading to overconfidence. In a study entitled Belief in the Unstructured Interview: The Persistence of an Illusion, participants predicted the future GPA of a set of students. They either received biographical information about the students or both biographical information and an interview. In some of the cases, the interview responses were entirely random, meaning they shouldn’t have conveyed any genuine useful information.

Before the participants made their predictions, the researchers informed them that the strongest predictor of a student’s future GPA is their past GPA. Seeing as all participants had access to past GPA information, they should have factored it heavily into their predictions.

In the end, participants who were able to interview the students made worse predictions than those who only had access to biographical information. Why? Because the interviews introduced too much noise. They distracted participants with irrelevant information, making them forget the most significant predictive factor: past GPA. Of course, we do not have clear metrics like GPA for jobs. But this study indicates that interviews do not automatically lead to better judgments about a person.

We tend to think human gut judgments are superior, even when evidence doesn’t support this. We are quick to discard information that should shape our judgments in favor of less robust intuitions that we latch onto because they feel good. The less challenging information is to process, the better it feels. And we tend to associate good feelings with ‘rightness’.

Experience ≠ expertise in interviewing

In 1979, the University of Texas Medical School at Houston suddenly had to increase its incoming class size by 50 students due to a legal change requiring larger classes. Without time to run another round of interviews, the school admitted candidates it had already interviewed but initially rejected as unsuitable for admission. Seeing as they had made it to the interview stage, these candidates had to be among the best applicants; they just hadn’t previously been considered good enough to admit.

When researchers later studied the result of this unusual situation, they found that the students whom the school first rejected performed no better or worse academically than the ones they first accepted. In short, interviewing students did nothing to help select for the highest performers.

Studying the efficacy of interviews is complicated and hard to manage from an ethical standpoint. We can’t exactly give different people the same real-world job in the same conditions. We can take clues from fortuitous occurrences, like the University of Texas Medical School’s change in class size and the subsequent lessons learned. Without the legal change, the interviewers would never have known that the students they rejected were of equal competence to the ones they accepted. This is why building up experience in this arena is difficult. Even if someone has a lot of experience conducting interviews, it’s not straightforward to translate that into expertise. Expertise is about having a predictive model of something, not just knowing a lot about it.

Furthermore, the feedback from hiring decisions tends to be slow. An interviewer cannot know what would happen if they hired an alternate candidate. If a new hire doesn’t work out, that tends to fall on them, not the person who chose them. There are so many factors involved that it’s not terribly conducive to learning from experience.

Making interviews more effective

It’s easy to see why job interviews are so common. People want to work with people they like, so interviews allow them to scope out possible future coworkers. Candidates expect interviews, as well—wouldn’t you feel a bit peeved if a company offered you a job without the requisite “casual chat” beforehand? Going through a grueling interview can make candidates more invested in the position and likely to accept an offer. And it can be hard to imagine viable alternatives to interviews.

But it is possible to make job interviews more effective or make them the final step in the hiring process after using other techniques to gauge a potential hire’s abilities. Doing what works should take priority over what looks right or what has always been done.

Structured interviews

While unstructured interviews don’t work, structured ones can be excellent. In Thinking, Fast and Slow, Daniel Kahneman describes how he redesigned the Israel Defense Forces’ interviewing process as a young psychology graduate. At the time, recruiting a new soldier involved a series of psychometric tests followed by an interview to assess their personality. Interviewers then based their decision on their intuitive sense of a candidate’s fitness for a particular role. It was very similar to the method of hiring most companies use today—and it proved to be useless.

Kahneman introduced a new interviewing style in which candidates answered a predefined series of questions that were intended to measure relevant personality traits for the role (for example, responsibility and sociability). He then asked interviewers to give candidates a score for how well they seemed to exhibit each trait based on their responses. Kahneman explained that “by focusing on standardized, factual questions I hoped to combat the halo effect, where favorable first impressions influence later judgments.” He tasked interviewers only with providing these numbers, not with making a final decision.

Although interviewers at first disliked Kahneman’s system, structured interviews proved far more effective and soon became the standard for the IDF. In general, they are often the most useful way to hire. The key is to decide in advance on a list of questions, specifically designed to test job-specific skills, then ask them to all the candidates. In a structured interview, everyone gets the same questions with the same wording, and the interviewer doesn’t improvise.
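To make this concrete, here is a minimal sketch, in Python, of how scores from a structured interview might be recorded and combined. The traits, questions, 1–5 scale, and equal weighting below are hypothetical illustrations, not Kahneman’s actual rubric; the point is only the shape of the process: every candidate gets the same predefined questions, interviewers report per-trait numbers, and the comparison between candidates happens outside the interview room.

```python
from statistics import mean

# Hypothetical traits and the predefined questions meant to probe them.
# In a real rubric these would be designed around job-specific skills and
# asked to every candidate with the same wording.
TRAITS = {
    "responsibility": [
        "Describe a time you owned a mistake. What did you do next?",
        "How do you track commitments you have made to others?",
    ],
    "sociability": [
        "Tell me about working with a colleague whose style differed from yours.",
        "How do you keep collaborators informed on a long project?",
    ],
}

def score_candidate(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the interviewer's 1-5 ratings for each trait.

    The interviewer's job ends here: they report numbers,
    not a hire/no-hire verdict.
    """
    return {trait: mean(scores) for trait, scores in ratings.items()}

def overall(trait_scores: dict[str, float]) -> float:
    """Combine trait averages into one comparable number (equal weights)."""
    return mean(trait_scores.values())

# Example: one interviewer's ratings for two candidates on the same questions.
candidate_a = score_candidate({"responsibility": [4, 5], "sociability": [3, 4]})
candidate_b = score_candidate({"responsibility": [3, 3], "sociability": [5, 4]})

print(candidate_a, overall(candidate_a))  # {'responsibility': 4.5, 'sociability': 3.5} 4.0
print(candidate_b, overall(candidate_b))  # {'responsibility': 3.0, 'sociability': 4.5} 3.75
```

One thing even this toy version makes obvious is that the scoring framework has to exist before anyone is interviewed; the interviewer improvises nothing and decides nothing.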

Tomas Chamorro-Premuzic writes in The Talent Delusion:

There are at least 15 different meta-analytic syntheses on the validity of job interviews published in academic research journals. These studies show that structured interviews are very useful to predict future job performance. . . . In comparison, unstructured interviews, which do not have a set of predefined rules for scoring or classifying answers and observations in a reliable and standardized manner, are considerably less accurate.

Why does it help if everyone hears the same questions? Because, as we learned previously, interviewers can make unconscious judgments about candidates, then ask questions intended to confirm their assumptions. Structured interviews help measure competency, not irrelevant factors. Ron Friedman explains this further:

It’s also worth having interviewers develop questions ahead of time so that: 1) each candidate receives the same questions, and 2) they are worded the same way. The more you do to standardize your interviews, providing the same experience to every candidate, the less influence you wield on their performance.

What, then, is an employer to do with the answers? Friedman says you must then create clear criteria for evaluating them.

Another step to help minimize your interviewing blind spots: include multiple interviewers and give them each specific criteria upon which to evaluate the candidate. Without a predefined framework for evaluating applicants—which may include relevant experience, communication skills, attention to detail—it’s hard for interviewers to know where to focus. And when this happens, fuzzy interpersonal factors hold greater weight, biasing assessments. Far better to channel interviewers’ attention in specific ways, so that the feedback they provide is precise.

Blind auditions

One way to make job interviews more effective is to find ways to “blind” the process—to disguise key information that may lead to biased judgments. Blinded interviews focus on skills alone, not who a candidate is as a person. Orchestras offer a remarkable case study in the benefits of blinding.

In the 1970s, orchestras had a gender bias problem: on average, a mere 5% of their members were women. Orchestras knew they were missing out on potential talent, but the audition process seemed to favor men over women. Those carrying out auditions couldn’t sidestep their unconscious tendency to favor men.

Instead of throwing up their hands in despair and letting this inequality stand, orchestras began carrying out blind auditions. During these, candidates would play their instruments behind a screen while a panel listened and assessed their performance. The panel received no identifying information about candidates. The idea was that orchestras would be able to hire without room for bias. It took a bit of tweaking to make the process work: at first, the panel could discern a candidate’s gender from the sound of their shoes, so candidates were then asked to audition without them.

The results? By 1997, up to 25% of orchestra members were women. Today, the figure is closer to 30%.

Although this approach is sometimes difficult to replicate for other types of work, blind auditions can provide inspiration for other industries that could benefit from finding ways to make interviews more about a person’s abilities than their identity.

Competency-related evaluations

What’s the best way to test if someone can do a particular job well? Get them to carry out tasks that are part of the job. See if they can do what they say they can do. It’s much harder for someone to lie and mislead an interviewer during actual work than during an interview. Using competency tests for a blinded interview process is also possible—interviewers could look at depersonalized test results to make unbiased judgments.

Tomas Chamorro-Premuzic writes in The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential, “The science of personnel selection is over a hundred years old yet decision-makers still tend to play it by ear or believe in tools that have little academic rigor. . . . An important reason why talent isn’t measured more scientifically is the belief that rigorous tests are difficult and time-consuming to administer, and that subjective evaluations seem to do the job ‘just fine.’”

Competency tests are already quite common in many fields. But interviewers tend not to accord them sufficient importance. They come after an interview, or they’re considered secondary to it. A bad interview can override a good competency test. At best, interviewers accord them equal importance to interviews. Yet they should consider them far more important.

Ron Friedman writes, “Extraneous data such as a candidate’s appearance or charisma lose their influence when you can see the way an applicant actually performs. It’s also a better predictor of their future contributions because unlike traditional in-person interviews, it evaluates job-relevant criteria. Including an assignment can help you better identify the true winners in your applicant pool while simultaneously making them more invested in the position.”

Conclusion

If a company relies on traditional job interviews as its sole or main means of choosing employees, it simply won’t get the best people. And getting hiring right is paramount to the success of any organization. A driven team of people passionate about what they do can trump one with better funding and resources. The key to finding those people is using hiring techniques that truly work.

Why You Feel At Home In A Crisis

When disaster strikes, people come together. During the worst times of our lives, we can end up experiencing the best mental health and relationships with others. Here’s why that happens and how we can bring the lessons we learn with us once things get better.

***

“Humans don’t mind hardship, in fact they thrive on it; what they mind is not feeling necessary. Modern society has perfected the art of making people not feel necessary.”

— Sebastian Junger

The Social Benefits of Adversity

When World War II began to unfold in 1939, the British government feared the worst. With major cities like London and Manchester facing aerial bombardment from the German air force, leaders were sure societal breakdown was imminent. Civilians were, after all, in no way prepared for war. How would they cope with a complete change to life as they knew it? How would they respond to the nightly threat of injury or death? Would they riot, loot, experience mass-scale psychotic breaks, go on murderous rampages, or lapse into total inertia as a result of exposure to German bombing campaigns?

Richard M. Titmuss writes in Problems of Social Policy that “social distress, disorganization, and loss of morale” were expected. Experts predicted 600,000 deaths and 1.2 million injuries from the bombings. Some in the government feared three times as many psychiatric casualties as physical ones. Official reports pondered how the population would respond to “financial distress, difficulties of food distribution, breakdowns in transport, communications, gas, lighting, and water supplies.”

After all, no one had lived through anything like this. Civilians couldn’t receive training as soldiers could, so it stood to reason they would be at high risk of psychological collapse. Titmuss writes, “It seems sometimes to have been expected almost as a matter of course that widespread neurosis and panic would ensue.” The government contemplated sending a portion of soldiers into cities, rather than to the front lines, to maintain order.

The effects of the bombing campaign, known as the Blitz, were brutal. Over 60,000 civilians died, about half of them in London. The total cost of property damage was about £56 billion in today’s money, with almost a third of the houses in London becoming uninhabitable.

Yet despite all this, the anticipated social and psychological breakdown never happened. The death toll was also much lower than predicted, in part due to stringent adherence to safety instructions. In fact, the Blitz achieved the opposite of what the attackers intended: the British people proved more resilient than anyone predicted. Morale remained high, and there didn’t appear to be an increase in mental health problems. The suicide rate may have decreased. Some people with longstanding mental health issues found themselves feeling better.

People in British cities came together like never before to organize themselves at the community level. The sense of collective purpose this created led many to experience better mental health than they’d ever had. One indicator of this is that children who remained with their parents fared better than those evacuated to the safety of the countryside. The stress of the aerial bombardment didn’t override the benefits of staying in their city communities.

The social unity the British people reported during World War II lasted in the decades after. We can see it in the political choices the wartime generation made—the politicians they voted into power and the policies they voted for. By some accounts, the social unity fostered by the Blitz was the direct cause of the strong welfare state that emerged after the war and the creation of Britain’s free national healthcare system. Only when the wartime generation started to pass away did that sentiment fade.

We Know How to Adapt to Adversity

We may be ashamed to admit it, but human nature is more at home in a crisis.

Disasters force us to band together and often strip away our differences. The effects of World War II on the British people were far from unique. The Allied bombing of Germany also strengthened community spirit. In fact, cities that suffered the least damage saw the worst psychological consequences. Similar improvements in morale occurred during other wars, riots, and after September 11, 2001.

When normality breaks down, we experience the sort of conditions we evolved to handle. Our early ancestors lived with a great deal of pain and suffering. The harsh environments they faced necessitated collaboration and sharing. Groups of people who could work together were most likely to survive. Because of this, evolution selected for altruism.

Among modern foraging tribal groups, the punishments for freeloading are severe. Execution is not uncommon. As severe as this may seem, allowing selfishness to flourish endangers the whole group. It stands to reason that the same was true for our ancestors living in much the same conditions. Being challenged as a group by difficult changes in our environment leads to incredible community cohesion.

Many of the conditions we need to flourish both as individuals and as a species emerge during disasters. Modern life otherwise fails to provide them. Times of crisis are closer to the environments our ancestors evolved in. Of course, this does not mean that disasters are good. By their nature, they produce immense suffering. But understanding their positive flip side can help us to both weather them better and bring important lessons into the aftermath.

Embracing Struggle

Good times don’t actually produce good societies.

In Tribe: On Homecoming and Belonging, Sebastian Junger argues that modern society robs us of the solidarity we need to thrive. Unfortunately, he writes, “The beauty and the tragedy of the modern world is that it eliminates many situations that require people to demonstrate commitment to the collective good.” As life becomes safer, it is easier for us to live detached lives. We can meet all of our needs in relative isolation, which prevents us from building a strong connection to a common purpose. In our normal day to day, we rarely need to show courage, turn to our communities for help, or make sacrifices for the sake of others.

Furthermore, our affluence doesn’t seem to make us happier. Junger writes that “as affluence and urbanization rise in a society, rates of depression and suicide tend to go up, not down. Rather than buffering people from clinical depression, increased wealth in society seems to foster it.” We often think of wealth as a buffer from pain, but beyond a certain point, wealth can actually make us more fragile.

The unexpected worsening of mental health in modern society has much to do with our lack of community—which might explain why times of disaster, when everyone faces the breakdown of normal life, can counterintuitively improve mental health, despite the other negative consequences. When situations requiring sacrifice do reappear and we must work together to survive, it alleviates our disconnection from each other. Disaster increases our reliance on our communities.

In a state of chaos, our way of relating to each other changes. Junger explains that “self-interest gets subsumed into group interest because there is no survival outside of group survival, and that creates a social bond that many people sorely miss.” Helping each other survive builds ties stronger than anything we form during normal conditions. After a natural disaster, residents of a city may feel like one big community for the first time. United by the need to get their lives back together, individual differences melt away for a while.

Junger writes particularly of one such instance:

The one thing that might be said for societal collapse is that—for a while at least—everyone is equal. In 1915 an earthquake killed 30,000 people in Avezzano, Italy, in less than a minute. The worst-hit areas had a mortality rate of 96 percent. The rich were killed along with the poor, and virtually everyone who survived was immediately thrust into the most basic struggle for survival: they needed food, they needed water, they needed shelter, and they needed to rescue the living and bury the dead. In that sense, plate tectonics under the town of Avezzano managed to recreate the communal conditions of our evolutionary past quite well.

Disasters bring out the best in us. Junger goes on to say that “communities that have been devastated by natural or manmade disasters almost never lapse into chaos and disorder; if anything they become more just, more egalitarian, and more deliberately fair to individuals.” When catastrophes end, despite their immense negatives, people report missing how it felt to unite for a common cause. Junger explains that “what people miss presumably isn’t danger or loss but the unity that these things often engender.” The loss of that unification can be, in its own way, traumatic.

Don’t Be Afraid of Disaster

So what can we learn from Tribe?

The first lesson is that, in the face of disaster, we should not expect the worst from other people. Yes, instances of selfishness will happen no matter what. Many people will look out for themselves at the expense of others, not least the ultra-wealthy who are unlikely to be affected in a meaningful way and so will not share in the same experience. But on the whole, history has shown that the breakdown of order people expect is rare. Instead, we find new ways to continue and to cope.

During World War II, there were fears that British people would resent the appearance of over two million American servicemen in their country. After all, it meant more competition for scarce resources. Instead, the “friendly invasion” met with a near-unanimous warm welcome. British people shared what they had without bitterness. They understood that the Americans were far from home and missing their loved ones, so they did all they could to help. In a crisis, we can default to expecting the best from each other.

Second, we can achieve a great deal by organizing on the community level when disaster strikes. Junger writes, “There are many costs to modern society, starting with its toll on the global ecosystem and working one’s way down to its toll on the human psyche, but the most dangerous may be to community. If the human race is under threat in some way that we don’t yet understand, it will probably be at a community level that we either solve the problem or fail to.” When normal life is impossible, being able to volunteer help is an important means of retaining a sense of control, even if it imposes additional demands. One explanation for the high morale during the Blitz is that everyone could be involved in the war effort, whether they were fostering a child, growing cabbages in their garden, or collecting scrap metal to make planes.

For our third and final lesson, we should not forget what we learn about the importance of banding together. What’s more, we must do all we can to let that knowledge inform future decisions. It is possible for disasters to spark meaningful changes in the way we live. We should continue to emphasize community and prioritize stronger relationships. We can do this by building strong reminders of what happened and how it impacted people. We can strive to educate future generations, teaching them why unity matters.

(In addition to Tribe, many of the details of this post come from Disasters and Mental Health: Therapeutic Principles Drawn from Disaster Studies by Charles E. Fritz.)

Stop Preparing For The Last Disaster

When something goes wrong, we often strive to be better prepared if the same thing happens again. But the same disasters tend not to happen twice in a row. A more effective approach is simply to prepare to be surprised by life, instead of expecting the past to repeat itself.

***

If we want to become less fragile, we need to stop preparing for the last disaster.

When disaster strikes, we learn a lot about ourselves. We learn whether we are resilient, whether we can adapt to challenges and come out stronger. We learn what has meaning for us, we discover core values, and we identify what we’re willing to fight for. Disaster, if it doesn’t kill us, can make us stronger. Maybe we discover abilities we didn’t know we had. Maybe we adapt to a new normal with more confidence. And often we make changes so we will be better prepared in the future.

But better prepared for what?

After a particularly trying event, most people prepare for a repeat of whatever challenge they just faced. From the micro level to the macro level, we succumb to the availability bias and get ready to fight a war we’ve already fought. We learn that one lesson, but we don’t generalize that knowledge or expand it to other areas. Nor do we necessarily let the fact that a disaster happened teach us that disasters do, as a rule, tend to happen. Because we focus on the particulars, we don’t extrapolate what we learn into figuring out how to better prepare for adversity in general.

We tend to have the same reaction to challenge, regardless of the scale of impact on our lives.

Sometimes the impact is strictly personal. For example, our partner cheats on us, so we vow never to have that happen again and make changes designed to catch the next cheater before they get a chance; in future relationships, we let jealousy cloud everything.

But other times, the consequences are far-reaching and affect the social, cultural, and national narratives we are a part of. For example, when terrorists use airplanes to attack our cities, we immediately increase security at airports so that planes can never again be used to do so much damage and kill so many people.

The changes we make may keep us safe from a repeat of those scenarios that hurt us. The problem is, we’re still fragile. We haven’t done anything to increase our resilience—which means the next disaster is likely to knock us on our ass.

Why do we keep preparing for the last disaster?

Disasters cause pain. Whether it’s emotional or physical, the hurt causes vivid and strong reactions. We remember pain, and we want to avoid it in the future through whatever means possible. The availability of memories of our recent pain informs what we think we should do to stop it from happening again.

This process, called the availability bias, has significant implications for how we react in the aftermath of disaster. Writing in The Legal Analyst: A Toolkit for Thinking about the Law about the information cascades this logical fallacy sets off, Ward Farnsworth says they “also help explain why it’s politically so hard to take strong measures against disasters before they have happened at least once. Until they occur they aren’t available enough to the public imagination to seem important; after they occur their availability cascades and there is an exaggerated rush to prevent the identical thing from happening again. Thus after the terrorist attacks on the World Trade Center, cutlery was banned from airplanes and invasive security measures were imposed at airports. There wasn’t the political will to take drastic measures against the possibility of nuclear or other terrorist attacks of a type that hadn’t yet happened and so weren’t very available.”

In the aftermath of a disaster, we want to be reassured of future safety. We lived through it, and we don’t want to do so again. By focusing on the particulars of a single event, however, we miss identifying the changes that will improve our chances of better outcomes next time. Yes, we don’t want any more planes to fly into buildings. But preparing for the last disaster leaves us just as underprepared for the next one.

What might we do instead?

We rarely take a step back and go beyond the pain to look at what made us so vulnerable to it in the first place. However, that’s exactly where we need to start if we really want to better prepare ourselves for future disaster. Because really, what most of us want is to not be taken by surprise again, caught unprepared and vulnerable.

The reality is that the same disaster is unlikely to happen twice. Your next lover is unlikely to hurt you in the same way your former one did, just as the next terrorist is unlikely to attack in the same way as their predecessor. If you want to make yourself less fragile in the face of great challenges, the first step is to accept that you are never going to know what the next disaster will be. Then ask yourself: How can I prepare anyway? What changes can I make to better face the unknown?

As Andrew Zolli and Ann Marie Healy explain in Resilience: Why Things Bounce Back, “surprises are by definition inevitable and unforeseeable, but seeking out their potential sources is the first step toward adopting the open, ready stance on which resilient responses depend.”

Giving serious thought to the range of possible disasters immediately makes you aware that you can’t prepare for all of them. But what are the common threads? What safeguards can you put in place that will be useful in a variety of situations? A good place to start is increasing your adaptability. The easier you can adapt to change, the more flexibility you have. More flexibility means having more options to deal with, mitigate, and even capitalize on disaster.

Another important mental tool is to accept that disasters will happen. Expect them. It’s not about walking around every day with your adrenaline pumping in anticipation; it’s about making plans assuming that they will get derailed at some point. So you insert backup systems. You create a cushion, moving away from razor-thin margins. You give yourself the optionality to respond differently when the next disaster hits.

Finally, we can find ways to benefit from disaster. Author and economist Keisha Blair, in Holistic Wealth, suggests that “building our resilience muscles starts with the way we process the negative events in our lives. Mental toughness is a prerequisite for personal growth and success.” She further writes, “adversity allows us to become better rounded, richer in experience, and to strengthen our inner resources.” We can learn from the last disaster how to grow and leverage our experiences to better prepare for the next one.