Announcing Two Workbooks on Better Arguments and Good Work


At a time when divisions between people feel like they are deepening, we believe that encouraging Better Arguments can lead to Good Work.

Check out two new workbooks we created in partnership with the Better Arguments Project, a collaboration between the Aspen Institute, Facing History and Ourselves, and The Allstate Corporation.

The first workbook is intended to spark dialogue in workplaces, while the second is intended for educators and schools. Both workbooks use an original dilemma from The Good Project as a tool for exploring the five principles of better arguments and the three Es of good work.

We hope these resources will assist all people in learning how to have better, more productive arguments. Click the buttons below to access.

Workplace Workbook
Education Workbook

This resource and others can also be found at http://betterarguments.org.

Palantir and The Two Forms of Synthesis

by Howard Gardner

Until recently, only those “in the know” had heard of the corporation named Palantir. But of late, it has come into the spotlight. For investors, on October 1, 2020, Palantir had an initial public offering on the New York Stock Exchange—market value in the neighborhood of twenty billion dollars. For newspaper readers, on October 25, 2020, Palantir was the cover story in the Sunday Magazine of The New York Times.

What is it? Palantir is a company that specializes in data analysis. It takes huge amounts of data, in almost any area, and, using artificial intelligence (AI) algorithms, organizes the data in ways that are seen as useful by the client. According to The Economist of August 29, 2020, “The company sells programs that gather disparate data and organise them into something usable for decision-makers, from soldiers in Afghanistan to executives at energy firms.” Then, in typical Economist fashion, comes the wry comment: “More than a technology project, it is a philosophical and political one.”

To this point, most of Palantir’s work has been for governments—chiefly the United States government (particularly the CIA and the Defense Department), but also for other governments believed to be friendly to the interests of the United States. While Palantir’s actual work is kept secret, it’s widely believed to locate sensitive targets (including the location of Osama bin Laden, as well as of undocumented immigrants and criminals on the run); identify regions that are dangerous for US soldiers or local police; trace the locations and spread of diseases (like COVID-19); and locate markets for commercial products. Of course, approaches used for one purpose in one place can be re-purposed for use elsewhere.

Palantir is the brainchild of two individuals. Peter Thiel, hitherto the better known of the two, was a co-founder of PayPal and is one of the few Silicon Valley executives to have publicly supported Donald Trump’s 2016 campaign for the presidency. Alex Karp, a law school graduate with a doctorate in political philosophy from Goethe University in Frankfurt, describes himself as a person on the left of the political spectrum.

Not surprisingly, given the mysterious work that it does and the apparently different political leanings of the co-founders, there is a lot of chatter about whether Palantir does good work. One is reminded of the debate over whether Google lives up to its famous motto, “Don’t be evil.”

But to ask whether a company does good work is to commit what philosophers call a “category error.” 

First of all, though the Supreme Court may consider a corporation to be an individual (Citizens United v. Federal Election Commission 2010), that characterization makes no sense in common language or—in my view—in common sense. Companies make products and offer services, but who asks for these and how they are used cannot be credited to or blamed on the company per se. For over a century, General Motors (GM) has built motor vehicles—but those vehicles could be ambulances that transport the injured to hospitals or tanks that are used to wage unjustified wars.  For over half a century, IBM has sold computers, but those computers could be used to track health factors or to guide missiles.

Second, even determining precisely what a company does, and to or for whom, may not reveal whether the work itself is good or bad. That decision also depends on what we as “deciders” consider to be good—is the missile being aimed at Osama bin Laden or Angela Merkel or Pope Francis? Do we think that none, some, or all of these individuals should be so located and then murdered? Is the hospital being used to treat those with serious illnesses or to hide terrorists? Indeed, despite the red cross on display, is it actually a hospital?

This is not to invalidate the idea of corporate social responsibility—but even if the leadership of a corporation is well motivated, it can scarcely prevent abuses of its products.

So far, my examples pertain to cases that can be understood by lay persons (like me). This is decidedly NOT the case with the work that Palantir does—work that I would call “synthesizing vast amounts of data.” The means of synthesizing are very complex—for short, I will call them “AI syntheses.” These synthesizing programs have been devised because the actual “data crunching” is so complicated and time-consuming that human beings could not possibly accomplish the task on any realistic timescale. Even more concerning, it is quite likely that no one quite understands how the patterns, the arrangements, “the answers” have been arrived at.

And so I think it is important to distinguish between two kinds of synthesizing—what I call AI Synthesizing and Human Synthesizing.  It’s the latter that particularly deserves scrutiny.

First, AI Synthesizing:

Think: How do we distinguish one face from another, or group different versions of the same face? “Deep learning” programs can do so reliably, even if we can’t explain how they accomplish this feat. So, too, winning at chess or “Go”—the program works even though we can’t state quite how. And, building up in complexity, the kind of synthesizing that Palantir apparently does—identifying markets for products, figuring out promising targets for attack or defense, or discerning the cause(s), the spread, or the cure(s) for a disease. The human mind boggles.

Work of this sort generates a variety of questions:

What is the purpose and use of the synthesizing?

Who decides which questions/problems are to be addressed?

Which data are included for analysis and synthesis, and which ones are not?  How is that determination made?

By which algorithms are the data being clustered and re-clustered? 

Can the parameters of the algorithm be changed and by whom and under what circumstances? 

Will the data themselves (and the algorithms used thereupon) be kept secret or made public?  Will they be available for other uses at other times?

Importantly, who owns the data?

Which individuals (or which programs) examine the results/findings/patterns and decide what to do with them? Or what not to do? And where does the responsibility for consequences of that decision lie?

Who has access to the data and the synthesis? What is private, public, destroyable, permanently available?

What happens if no one understands the nature of the output, or how to interpret it?

These questions would have made little sense several decades ago; but now, with programs getting ever more facile and more recondite, they are urgent and need to be addressed.

Here’s my layperson’s view:  I do not object to Palantir in principle. I think it’s legitimate to employ its technology and its techniques—to allow AI synthesis.

Enter Human Synthesis.

With regard to the questions just posed: I do not want decisions about initial questions or goals for the enterprise, relevant data, or the interpretation or uses of results to be made by a program, no matter how sophisticated or ingenious. Such decisions need to be made by human beings who are aware of and responsible for possible consequences of these “answers.” The buck stops with members of our species and not with the programs that we have enabled. The fact that the actual data crunching may be too complex for human understanding should not allow human beings to wash their hands of the matter, or to pass on responsibility to strings of 0s and 1s.

And so, when I use the phrase “human synthesis” I am referring to the crucial analysis and decisions about which questions to ask, which problems to tackle, which programs to use—and then, when the data or findings emerge, how to interpret them, apply them, share them, or perhaps even decide to bury them forever.   

For more on human synthesis—and the need to preserve and honor it in an AI world—please see the concluding chapters of my memoir A Synthesizing Mind.

Reference

Michael Steinberger, “The All-Seeing Eye,” The New York Times Magazine, October 25, 2020.

© Howard Gardner 2020

I thank Shelby Clark, Ashley Lee, Kirsten McHugh, Danny Mucinskas, and Ellen Winner for their helpful comments.

Teaching Good Work: Announcing Our New Lesson Plans

Click the cover page to access our lesson plans.

The Good Project is excited to announce the release of a new and comprehensive set of lesson plans focused on teaching the principles and strategies of excellent, ethical, and engaging “good work.”

Click here to download the lesson plans booklet.

The new lesson plans are freely accessible. They were designed for secondary school students but are adaptable to any audience.

The sequence of lessons will guide students to think deeply about The Good Project’s framework of “good work,” to develop reflective habits that will allow them to navigate complexity, to fully understand and articulate their own beliefs and values, and to make informed decisions in the future. 

The full packet consists of the following elements:

  • Introductory material to familiarize teachers with The Good Project’s approach and theory of change

  • 16 full lesson plans, each of which includes an overarching goal, specific directions, assessment recommendations, and a set of tools and worksheets

  • 4 unit rubrics with designated criteria to measure progress towards lesson goals

  • A set of appendices with further information and suggestions

Students will collect the work they generate throughout the curriculum in a portfolio that can then be evaluated as a demonstration of skills learned. 

We would like to thank The Argosy Foundation for providing the generous funding that made this work possible. The Good Project has also received significant support from The Saul Zaentz Charitable Foundation, The Endeavour Foundation, and additional anonymous funders. 

We also extend our appreciation to the educators who reviewed and provided feedback on previous drafts of the lesson plans. 

The Good Project encourages educators who are planning to use this resource to reach out to us on our contact page here, where we also welcome questions and other inquiries.

October Wrap-Up: 5 Articles Worth Reading

Kirsten McHugh, The Good Project Researcher

While it’s important to stay informed, the deluge of election and pandemic coverage can at times become overwhelming. Take a momentary break to reset and check out what we have been reading this month.

  1. The Harvard Business Review recently published a piece by Kristi Hedges in which she references The Good Project’s Value Sort in an article on the importance of finding a workplace that reflects your own values. Hedges argues that going into an interview with a firm understanding of your own values will allow you to ask the kinds of questions that get at the heart of what is truly important at the company you are considering.

  2. When considering the scale of a global pandemic and talk of a looming threat to US democracy, cheating in your studies might seem like small potatoes. That being said, during this politically polarized time, you might wonder where all the honest people have gone. In this vein, The Chronicle of Higher Education explores cheating in their recent article “Students Cheat. How Much Does It Matter?”

  3. New technology always promises to make our lives easier, but do the small gains actually deliver us to a better tomorrow? In a recent Harvard Gazette article, Ben Boothman examines the promise of AI in contrast to the potential ethical pitfalls to society embedded within the code.

  4. Check out a review of Dr. Howard Gardner’s latest book, A Synthesizing Mind: A Memoir from the Creator of Multiple Intelligences Theory, featured in The Washington Post. Gardner was also interviewed about his memoir by R. Bruce Rich at the Harvard Advanced Leadership Initiative.

  5. The Black Lives Matter Movement has many of us asking ourselves how we can do more in our professional lives to end systemic racism. This month, The Atlantic’s Kristina Rizga interviews veteran history teacher Robert Roth and learns how Roth has spent his career fighting for racial justice through his teaching.

From all of us at The Good Project, have a Happy (and safe) Halloween!

The Costs of Meritocracy:  Two Destructive Forms of Being “Smart”

by Howard Gardner (with comment by Michael Sandel)

Michael Sandel, the highly esteemed political philosopher at Harvard, has written The Tyranny of Merit—a powerful indictment of contemporary society, especially its versions in the United States and England. In this provocative book, Sandel reflects at length on the importance nowadays of being “smart.” As one who has spent four decades critiquing the use of the word “intelligent,” I paid careful attention to Sandel’s words and his case.

Coined in the late 1950s by British social analyst Michael Young, “meritocracy” denotes a state of affairs: a once aristocratic, inherited society is taken over by individuals presumed to be more talented and more appropriate leaders for the various sectors of society. At first blush, this transfer of power and authority sounds good and right—we should be led and inspired by people of ability (think: House of Commons), rather than by people who inherit their wealth, title, and position (think: House of Lords). Even though Young wrote in an ironic spirit—do we really want the students with the highest grades in school to be entrusted with decisions about war, peace, trade, health, and the like?—the concept of meritocracy has come to be used positively. Indeed, both Presidents Clinton and Obama spoke explicitly and continuously about the importance of a society in which merit is rewarded… and rewarded again.

Very important for these and other contemporary leaders is “being smart.” In these days of Google counts, we no longer have to wave our hands about such an assertion. President Obama talked explicitly about “smart” over and over again—in his own words: smart policy, smart foreign policy, smart regulations, smart growth, smart spending cuts, smart grids, smart technologies. Overall, he used the adjective “smart” in connection with politics and programs more than 900 times! So, too, did his meritocratically disposed predecessor Bill Clinton.

In fact, even Donald Trump, in so many ways different from these Democrats, insists over and over again that he is smart, “very smart”; his cabinet has the highest possible IQ; his uncle was a professor at MIT; he brags about his family’s matriculation at the Wharton School; Joe Biden is “slow”; indeed, in the debate on September 29 of this year, he pounced on Biden’s use of the word “smart” and denigrated his opponent’s intellect and school grades.

The exuberance about intellect transcends party lines and epochs—indeed, Sandel might claim, there is not even a counter-story. No one explicitly calls for the return of a hereditary aristocracy or even of inherited wealth and positions…. though Trump does profess to love “the poorly educated.”

Sandel takes his critique very far.  As his title suggests, a celebration of—or even a reluctant surrender to—meritocracy has proved to be disastrous for the contemporary world. On his account: individuals who do well in school and on standardized tests get to attend elite, selective colleges; secure well-paying jobs with concomitant “perks”;  and pass on these social benefits to their children. The statistics are overwhelming, irrefutable, chilling. And even those meritocrats who acknowledge that they may not be wholly responsible for their own success cannot help looking down on those who have not done as well in the Darwinian struggle for worldly success.  

More seriously and more destructively—on Sandel’s account—those who have not attended or failed to graduate from college, and may not even have a steady “respectable” job, feel frowned upon, ignored, or deemed to be “deplorables” mired in “fly-over country.” Ultimately, this state of affairs leads to a society at war with itself, and, quite possibly, the end to democracy and the American (or another national) dream. 

Sandel proposes two kinds of solution: 1) technological—for example, radically changing the way that one selects among applicants for admission to elite colleges; 2) communal and even spiritual—considering all citizens as equally worthy of respect and conveying that respect in every possible way.

Sandel’s impressive  (but also depressing) account stimulates two lines of thought—both connected to my own decades-long reflection on intelligence. As most readers of this blog will know, I took the lead in challenging the notion of a single intelligence, as measured by an IQ or SAT test, and in calling instead for a recognition of different kinds of intelligence, and perhaps as well, an honoring of these different kinds of minds. While notions like “social” or “emotional” intelligence have entered into public discourse, they do not emerge in Sandel’s analysis.

That’s OK by me. But to nuance Sandel’s analysis, I’d suggest that the kinds of intelligence or intelligences honored in 2020 are quite different from those that were valued in earlier epochs. As just one example: 150 years ago, admission to selective colleges required mastery of ancient languages—so-called linguistic intelligence. Nowadays, no one cares about languages (let alone classical ones), but coding and computing (logical-mathematical intelligence) are at a premium. And as machines get “smarter,” we may well be selecting for yet different kinds of intelligence—ones that are not relevant to machines—such as musical, bodily, or personal intelligences. The word “smart” may not change—but the knowledge and skills to which it refers can and do change radically. And indeed, some of our most successful entrepreneurs—see Bill Gates and Steve Jobs—never even completed college because their temperament and ambitions were misaligned with the agenda of college. Ultimately, of course, they received their share of honorary degrees. Even Donald Trump, who apparently had someone else take his SATs and refuses to reveal his college grades, clearly has “media” intelligence.

So much for smartness—where, as I say, Sandel’s argument poses no problem for me. But I have considerable unease with his overall recommendation—that meritocracy should be replaced by conferring dignity on all human beings. As I read Sandel, all human beings are worthy of dignity or respect (I prefer the latter term), independent of who they are, how they behave, or how they think about the world. This may sound reasonable at first blush, but it’s not the way that I conceive the issue.

My view: As they grow—indeed, as we grow—individuals should be expected to behave with respect towards others, both those known to them and those who are strangers. And when faced with challenging issues or ideas, all human beings should attempt to deal with them as sensitively and sensibly as possible. Millionaires or even billionaires should not be treated with respect because of the money that they have inherited or amassed; rather, they need to earn that status by how they behave, and to be deprived of that status when they misbehave. By the same token, the plumber or electrician or waiter—three examples frequently used by Sandel and other philosophically oriented analysts—is entitled to as much respect and dignity as the rich person, but not just by dint of their vocation… but rather in light of how they behave toward others, day in and day out.

Of course, how we behave toward others is not something that we are born with. Rather it’s what we garner from family, neighbors, friends, lessons in school and in religious settings, from what we read and view in schools, in movie houses, and nowadays, especially, online.  And here is where my intuitions may differ from  those of Michael Sandel. I don’t think that good, moral, respectful behavior is any more or any less likely from those who win the meritocratic laurels than from those who for whatever reason do not seek or display those laurels.  

In neither case is one’s deportment toward others a function of intelligence—however it’s defined and/or measured. As I have often argued, an intelligence can be used positively or destructively. Both Goethe and Goebbels had high linguistic intelligence in German; Goethe wrote estimable poetry, Goebbels fomented hatred. Both Mandela and Milosevic had plenty of interpersonal intelligence—Mandela brought together long hostile parts of the South African population, Milosevic fomented ethnic cleansing.

Whether smart or not smart in one or another way (whatever one’s array of intelligences), whether a winner or a loser in a particular meritocratic sweepstakes (whether a CEO or a blue collar worker) is independent of whether one is worthy of respect or dignity.  One develops those assets in the course of life—it’s never too early but it may never be too late either.  And a society in which individuals respect one another for how they relate to others is the one in which I would like to live.

© Howard Gardner 2020

Comment by Michael Sandel:

Howard, I think there is some confusion here. I do believe that all human beings are worthy of dignity or respect, independent of who they are and how they behave. This is the basic Kantian idea underlying respect for human rights.  It has nothing to do with intelligence, whether of one kind or many kinds.  Even a war criminal such as Milosevic, for example, is worthy of respect in this Kantian sense.  Though he deserves moral condemnation and punishment for his crimes, it would be wrong to torture him.  I doubt we disagree about this. (You’ll tell me if I’m wrong.)

But Kantian respect for persons as persons, or human dignity as such, is not my alternative to meritocracy. By emphasizing the dignity of work, I am proposing that we broaden our understanding of what counts as contributing to the common good beyond the value the labor market assigns to our contributions. This is why I emphasize “contributive justice,” by which I mean conferring appropriate social recognition and esteem on valuable contributions that the market may not properly recognize (such as care work, for example, or the work now being performed by “essential workers” during the pandemic).

You rightly draw our attention to yet a third basis of social regard or esteem, having to do with how people behave, whether they treat others with respect, and so on.  So we might distinguish three different grounds of respect:

(1) Kantian respect for human dignity, which requires that we respect everyone’s human rights, regardless of what work they do or how well they behave;

(2) Respect for the dignity of work, which requires that we accord social recognition to those who make valuable contributions to the common good (typically but not only through work; unpaid community service should certainly count); and

(3) Respect or admiration for those who behave morally, which includes treating others with respect but also includes other praiseworthy behavior.

In the book, my primary alternative to meritocracy is #2. But this is not inconsistent with affirming #1 and #3.  I certainly do not think “that good, moral, respectful behavior” is more likely “from those who win the meritocratic laurels than from those who for whatever reason do not seek or display those laurels.” So this is not a point of disagreement between us. 

Response by Howard:

Thanks, Michael, for this very thoughtful and helpful clarification.  I think we are broadly in agreement. I’m not confident that we can simply instruct or encourage individuals to honor all work equally—though it’s been a goal of social reformers for centuries. I have slightly more confidence that we can instruct or encourage individuals to distinguish between highly-paid work, on the one hand, and ‘good work’—work that is excellent, engaging, and ethical, on the other. But I’d be pleased to encourage both approaches.