How scientific are public-opinion polls when they report, not how the people are going to vote—where the polls are generally right—but how people think: about labor, war, divorce, and so on? ROBERT COBB MYERS here examines the ambition of the pollsters to establish themselves as an accepted, accurate gauge of the public mind, so that they might serve as a guide to decision and action in the broad fields of public policy. He suggests that the difficulties involved in making the polls a respectable scientific yardstick of what people think are very far from being overcome.
_____________
Bored with election forecasting, both George Gallup and Elmo Roper, the two leading personalities in public-opinion polling, are seeking new worlds to conquer. Gallup last spring, while lecturing to a group of Princeton undergraduates, complained that it seemed downright silly to continue telling people the day before an election how they were going to vote the next day, but that he did not see how he could discontinue these predictions in the face of the huge popular demand for them. Roper, for his part, announced on September 9 in his New York Herald Tribune column that he was through publishing predictions on the outcome of this year’s presidential election unless something entirely unpredictable occurred between then and November 2. “I can think of nothing duller or more intellectually barren,” he wrote, “than acting like a sports announcer who feels he must pretend he is witnessing a neck-and-neck race that will end in a photo finish or a dramatic upset for the favorite—and then finally have to announce that the horse which was eight lengths ahead at the turn is still eight lengths ahead.”
These pronouncements by no means presage the imminent dissolution of Roper’s and Gallup’s organizations. Their attention has already been largely diverted to other matters. These other matters, in Mr. Roper’s words, include attempts “to discover the opinions that people hold on vital questions; to try to get back of this to the difficult, and sometimes hazy reasons why they hold their opinions; . . . to try to discover areas of agreement and of ignorance and how they vary by geographical location, size of place, economic rank, professional status.” These problems also interest Dr. Gallup, although he is somewhat less interested than Roper in why people say they think this or that on any specific question. And to Roper’s list Gallup would most emphatically add market research—the attempt to predict public behavior that affects the cash register—as a most challenging, and profitable, activity.
_____________
The history of election polling can be neatly divided into two periods. The era prior to 1936 can be thought of as the “catch-as-catch-can” period, that subsequent to 1936 as the “cross-section” period. The first period may be said to have begun in 1824 when the Harrisburg Pennsylvanian took a straw vote on the presidential chances of John Quincy Adams, and ended in 1936 when the Literary Digest predicted a landslide for Landon. The principal methods of obtaining opinions in the “catch-as-catch-can” period were to solicit responses by mail, or to print ballots in newspapers and magazines and ask readers to fill them out and send them to the sponsors of the poll. In some few instances opinions were obtained by interview. But in all cases the sample was haphazard.
The “cross-section” period dates from 1936, when both Gallup and Roper predicted Roosevelt’s election, Roper coming within one per cent of the popular vote, but Gallup underestimating Roosevelt’s popular support by seven per cent. Both had operated by having interviewers personally question a “scientifically selected cross-section” of the nation’s voters. Basing their selection on the long-known statistical theories or “laws” of Bernoulli, Pearson, and other mathematicians, they had chosen for questioning a few thousand people who roughly approximated a stratified random sample of the electorate.
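The stratified-sampling idea behind the "cross-section" can be sketched in modern terms: divide the electorate into strata (by region, sex, income, and so on) and draw respondents at random from each stratum in proportion to its share of the whole. A minimal illustration in Python — the strata names and shares below are invented for the example, not Gallup's or Roper's actual quotas:

```python
# Hypothetical strata with invented shares of the electorate.
strata = {
    "urban_male":   0.24,
    "urban_female": 0.26,
    "rural_male":   0.26,
    "rural_female": 0.24,
}

def stratified_sample_sizes(total_n, strata):
    """Allocate a total sample proportionally across strata."""
    return {name: round(total_n * share) for name, share in strata.items()}

sizes = stratified_sample_sizes(3000, strata)
# Each stratum would then be sampled at random from its own population,
# e.g. with random.sample(stratum_population, sizes[name]).
print(sizes)
print(sum(sizes.values()))  # proportional allocation over the whole sample
```

The "few thousand people" the pollsters questioned were, in effect, the union of such per-stratum draws; the scientific claim stands or falls on how faithfully the strata and their shares mirror the actual electorate.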
Since 1936, although the method was constantly being refined and improved, the focus in public-opinion polling, as far as the public was concerned, remained on election forecasting. Now we are notified that the polls are ready, willing, and able to shift to a new focus—“opinion-casting” on questions of social, economic, and political importance which affect public policy. Their ambition is to guide public policy, through making the results of opinion polls available to elected and appointed government officials.
Gallup has enunciated the new purpose clearly (in A Guide to Public Opinion Polls, Princeton, 1948): “Great leaders will seek information from every reliable source about the people whom they wish to lead [italics mine]. For this reason they will inevitably pay more attention to facts about the current state of public thinking and of public knowledge. The public-opinion poll will be a useful tool in enabling them to reach the highest level of their effectiveness as leaders.” One theory of leadership and the democratic process implicit in Gallup’s statement—the adjustment of the statements, and perchance the policy, of candidates to presumed mass-opinion—can be seen in operation in every political campaign: what its consequences may be for the day-to-day functioning of a democratic society is another matter, which we will consider more fully later.
_____________
Assuming that issue polling can so crucially influence public policy as its advocates imply, it would seem proper that it should be undertaken by non-profit, tax-supported, or endowed agencies. After all, it claims to be nothing less than a scientific register of “the will of the people,” and even if it made no such claim it might well win acceptance as the ultimate sanction of any proposed policy. Yet, at the moment, there are only two non-profit agencies conducting nationwide public policy polls: the United States Department of Agriculture, and the National Opinion Research Center, which functions on Marshall Field Foundation funds and is attached to the University of Chicago. However, the American Institute of Public Opinion, which is jointly owned by Gallup and his sales manager, Harold Anderson, is today the most important single instrument in this field. It is organized and run, as is any private business corporation, for profit. Elmo Roper, Incorporated, much of whose work falls into the public-policy area, conducts the Fortune poll as well as undertaking other private contracts and commissions.
It is worth remarking that both Gallup and Roper have now extended themselves into the international sphere. Roper’s worldwide organization is known as International Public Opinion Research and is headed by Elmo Wilson, one of the co-authors of the Time magazine “Current Affairs Test,” and until recently research director for programs of the Columbia Broadcasting System. Gallup’s international organization consists of twelve nation-wide privately owned polls, in as many foreign countries, which interlock in their finances and direction with his AIPO. In addition, in the United States, there is a host of imitators at state and city levels—some commercial, some quasi-commercial, and some purely non-profit.
Essentially, Gallup rests his claim for the important role public-policy polling may play in public life on two main premises. The first of these is that the American Institute of Public Opinion is a scientific measuring instrument. The second, essential to bolster the claim to be a scientific yardstick of public opinion, maintains that “polling on issues is essentially the same as the problem of polling on candidates.” We will examine each of these in turn. We shall concentrate on Gallup’s organization in our discussion; but it will be clear that much of what we say is applicable to the polling process in general, as conducted by all the organizations now in the field.
_____________
Is the American Institute of Public Opinion a scientific instrument of measurement? It is generally agreed that, to be considered scientific, an experiment or claim must be repeatable, verifiable, and impartial.
Repeatability. A finding in order to be repeatable must present all the pertinent facts of the experiment. In the case of cross-section polling this means, for example, that when Gallup reports that 73 per cent of the people in the United States believe that Russia will start a war of aggression (reported last February), full data concerning his sample cross-section should be available, including how many people in each stratum and sub-stratum were questioned. To date, however, Gallup refuses to release details concerning these cross-sections, or to publicize the total number of people questioned in any particular poll, presenting his results in percentages only. He persists in this despite repeated warnings from reputable psychologists that the practice tends to put his reports outside the bounds of scientific respectability. In a monograph on Opinion-Attitude Methodology prepared in 1946 for the American Psychological Association, Professor Quinn McNemar of Stanford stated that this persistent failure “to give the number of cases in their over-all samples and in the sub-groups involved in breakdowns does not inspire confidence in the scientific value of their data.”
Nevertheless, it should be noted that if an investigator wants to take the trouble to travel to Princeton, New Jersey, he may, if he is considered to be “qualified,” examine the punched machine cards which are made up for each respondent in a national poll. Few scientists, however, will concede that Gallup has fully discharged his obligation to scientific procedure by this little publicized provision.
Verifiability. The only opinion poll that is ever actually verified in the strict sense of the term is the last poll just before an election. And there is some evidence to support the belief that the number of people questioned in this crucial last-minute polling is greatly in excess of the number questioned at any other time or in any other type of poll.
In sampling theory, however, a generally accepted means of verification is to take two or more samples and test the results to see whether or not it is probable that they were drawn from the same population. For us this means that if two polling organizations should poll on the same question, and come up with results that are essentially the same, we could say that they had verified each other in the sense that both their samples had probably been drawn from the same population or “universe.” However, even if both samples gave the same result, it might be possible that they both erred in the same way in misrepresenting the population. Therefore, not even sampling theory provides us with a means of verifying that a poll properly represents the population as a whole.
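The cross-check described above can be made concrete with a standard two-proportion test: given the "yes" counts and sample sizes of two independent polls on the same question, one asks whether the observed difference is larger than sampling error alone would explain. A sketch — the counts here are invented for illustration, since (as noted above) the pollsters did not publish their sample sizes:

```python
import math

def two_proportion_z(yes1, n1, yes2, n2):
    """z statistic for the difference between two sample proportions."""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Two hypothetical polls of 1,500 respondents each, 54% vs. 52% "yes":
z = two_proportion_z(810, 1500, 780, 1500)
# |z| below about 1.96 is consistent with both samples coming from one
# population; a large |z| points to different populations or methods.
print(round(z, 2))
```

Note that this test addresses only whether the two samples could have come from the same population; as the paragraph above observes, both samples could still misrepresent that population in the same way.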
In this connection it is interesting to note the wide discrepancy existing in the following reports on a similar question by Gallup and Roper. The Roper report appears in the April 1948 issue of Fortune, and the Gallup report was released to his 125 client newspapers on March 17, 1948.
GALLUP (AIPO) question: “Do you think that prices, in general, will be higher, lower, or about the same six months from now?”

ROPER (Fortune) question: “Do you think that the prices of most of the things you buy will be higher, lower, or about the same six months from now?”

|            | Gallup (AIPO) | Roper (Fortune) |
|------------|---------------|-----------------|
| Higher     | 14%           | 54%             |
| Lower      | 39            | 9               |
| Same       | 36            | 22              |
| No opinion | 11            | 15              |
Because of the slight difference in question wording, are we to interpret this as meaning that most people think that prices in general will go down, but that the prices of things they buy will go up? That would seem to be nonsense. Other explanations seem more sensible: perhaps the two samples represent different populations, or the differences shown are differences caused by the instructions each organization gives its interviewers on the way it should ask questions, or by some other factor extraneous to the question itself. Which is correct? No one knows.
Impartiality. “Polling organizations,” writes Gallup in the 1948 edition of his A Guide to Public Opinion Polls, “can gain acceptance only by maintaining a position of complete impartiality.” It is now pertinent to review his record on this point of scientific respectability. In 1936, in 1940, and in 1944, the reports of the AIPO consistently overestimated the Republican vote and underestimated the Democratic vote. Gallup himself is a registered Republican. And, in explaining his 1944 results before the Congressional Committee to Investigate Campaign Expenditures of the 78th Congress, he admitted that his error lay in his hunch that a trend toward Dewey had set in which his poll had not caught—he had therefore “adjusted” the Republican opinion-poll results upward, and had reported this figure to his newspaper clients rather than the poll results as received from his interviewers. (Adjustments of various kinds are, of course, a necessary part of polling.) Furthermore, he complained that his sampling had been at fault—he had polled too many upper-income and too few lower-income voters. A group of impartial statisticians called in by the congressional committee agreed that Gallup’s explanations, while devastating, were correct.
_____________
In the case of the AIPO’s public-policy polling, too, professional psychologists have, through their own research, uncovered two areas in which AIPO reports have shown extreme partiality or bias.
Professor Ross Stagner of the University of Illinois investigated the pre-war Gallup and Roper polls dealing with the isolationist-interventionist controversy up to 1940. He published his findings in the August 1940 issue of Sociometry, and said, in part: “A comparison of the Gallup and Fortune surveys of American attitudes regarding intervention suggests that both polls give some advantage to the interventionist position in forming questions, but that the American Institute (Gallup) is considerably the worse offender in this respect. It seems certain that the publication of poll results giving a somewhat one-sided view of things, and with some over-estimation of the interventionist opinion, has facilitated a ‘bandwagon’ movement among persons previously undecided or mildly isolationist in attitude. . . . In the Gallup organization . . . there has been a tendency to seek dramatic, aggressive topics for poll questions and to state these in simple, positive form. . . . Undoubtedly . . . some unconscious bias in favor of intervention . . . has repressed the observation that a number of items showed obvious defects in construction and that these were extraordinarily likely to be of such a type as to over-estimate aggressive opinions.”
The second area in which AIPO bias in published reports has been demonstrated is that of labor-union organization. In the 1946-47 Winter issue of the Public Opinion Quarterly, Professor Arthur Kornhauser of Wayne University published the results of an exhaustive study he had made of all the questions dealing with organized labor and published by the leading polls from 1940 to 1945 inclusive.
Kornhauser’s conclusions were as follows: “Of the 155 questions examined, only 8 deal with positive or favorable features of unionism; 66 are neutral or doubtful; and 81 are concerned with union faults, activities the public condemns, or proposed restrictions upon unions. Three-fourths of all the AIPO (Gallup) questions, and about one-third of the questions from the other agencies are in the negative direction. . . . The odds run strongly against labor. . . . It is interesting to note that the questions which are asked repeatedly (notably by the AIPO) are almost all ones in the unfavorable category. They thus serve to reiterate and reinforce in the public’s thinking points against unions which are already condemned. . . . Polls on labor would often look very different if corresponding attitudes toward business were reported at the same time. But few, indeed, are the questions to bring out these faults of business.”
The “bandwagon” question. Bias may be particularly important in the case of public polling because of what has been called the “bandwagon” effect: the tendency of people to make up their minds in the direction of what they believe to be majority opinion.
This “bandwagon effect” has been shown to be not so important in the case of national elections, because voting opinion seems to be remarkably stable and long-run. Most people make up their minds early regarding the presidential candidate for whom they will vote, and keep them made up that way. In 1940 and 1944 Professor Paul Lazarsfeld of Columbia University’s Bureau of Applied Social Research questioned people about their voting intentions a month before the election, and then interviewed them after the election as to their actual vote. In both instances he found that 87 per cent actually voted as they had said they intended to vote a month before the election. And only two per cent of the total changed from one candidate to another. (The 1944 study is fully reported in The People’s Choice, New York, 1944.)
Published election forecasts seem to be effective opinion influencers only in the case of persons who have not already formed an opinion, who read and believe the poll’s report,¹ and who do not live in an area where majority opinion is at variance with national majority opinion. In the latter case, two “bandwagon” effects operate, and the local one is likely to be more important than the national. In Alabama, for example, the undecided voters would probably be influenced less by the poll reports of Dewey’s national lead than by the local majority in favor of Thurmond.
_____________
However, when we turn from election polling to public-policy polling we find an entirely different situation, particularly in the case of new issues. The general rule of social psychology which applies here is that people are suggestible to the induction of new opinion in direct relationship to the degree to which their opinion is not already crystallized on the issue. This means, for example, that if every newspaper in the land should espouse the views of Philip Wylie in derogation of the “Cult of Ma” it would have very little effect on the purchase of Mother’s Day cards; but that if the newspapers should feed us a steady diet of one-sided derogatory opinion concerning the people of “Zambodia,” it might well be decisive, since our Zambodian opinions are not only uninformed but zero.
A classic experiment along these lines was carried out in 1933 by Albert D. Annis and Norman C. Meier, and reported in 1934 in the Journal of Social Psychology. They prepared two series of editorials to be run in a student newspaper, the Daily Iowan, one consistently favorable to Mr. W. Morris Hughes (Prime Minister of Australia from 1915 to 1923), and the other consistently unfavorable. Both series indicated that Mr. Hughes was soon to visit the Iowa campus as a guest speaker. One group of students was given the unfavorable editorials to read, and another, the favorable. An opinion questionnaire about Hughes was then administered, and it was found that 98 per cent of the subjects reading favorable editorials were favorably biased, and 86 per cent of those reading unfavorable editorials were unfavorably biased. These induced attitudes persisted unchanged for at least four months.
Annis and Meier state: “As a general conclusion it may be observed that opinion can be induced by means of judiciously selected suggestions in as short a time as seven issues of a newspaper, even when [they should have said especially when] the person, institution, or question may be quite unknown at the inception of the series. . . . ”
Even so apparently neutral an act as asking a person’s opinion, if it is on a question about which he knows practically nothing, might change his “opinion” (that is, what he tells an interviewer). Leo Crespi conducted an experiment in Trenton, New Jersey, to determine what effect, if any, the interview experience itself might have on the opinions of respondents (Public Opinion Quarterly, Spring 1948). Crespi ran a standard poll in which he obtained names and addresses; then later, employing the pretext that the first questionnaires had been accidentally destroyed by fire, he re-interviewed these same people on the same questions. On the second interview he found that the number of “No Opinion” answers was significantly less than the number which had been given on the first interview.
Logically, it would appear likely that the “No Opinion” respondents might well shift preponderantly in one direction or another through the use of biased or “loaded” question wording. With one million people a year being asked questions about various matters, the total impact of opinion polling in actually affecting opinions may be sizeable. However, to deliberately set out to shift opinion in a certain direction by interviewing would be prohibitively tedious and expensive.² The effect could be achieved much more efficiently and quickly by a direct publicity campaign in mass media of communication.
_____________
Is the problem of polling on issues essentially the same as the problem of polling on candidates?
In effect, Gallup tells us that since he can predict election winners he is scientifically qualified to measure the public’s temper on all sorts of questions. However, there are many points at which the problem of polling on issues is far more difficult and complex than the problem of polling on candidates for political office.
The cross-section. The proper cross-section for predicting the outcome of an election is a stratified sample of persons of voting age who are likely to vote. The characteristics of our voting population are fairly well known. In comparison to their numbers in the total population, the voting population includes more males than females, more older than younger persons, more wealthy than poor, and more educated than uneducated. Most Negroes in the South have to be subtracted.
Can this voting cross-section be used for polling on public policy questions? If not, what kind of a cross-section should be used? Why? These questions have not been answered, and cannot be answered until it is determined for what purpose public-policy polls are to be used, and what they are supposed to predict. If one believes that for all purposes of public-policy formulation every citizen’s opinion, no matter what his age or status, is just as important as every other citizen’s, then the question is simple: one simply uses a cross-section based upon the census. But, ordinarily, the matter is not considered to be quite so simple, and no practical politician could ever be convinced that the opinion of an illiterate migrant laborer on the question of raising or lowering the tariff, for instance, is as important as the opinion of a New York import-export broker.
Or, to use another example, are the opinions of women and oldsters regarding military conscription as important as those of young men? Some people in looking at this problem as a whole have thrown up their hands and concluded that if public policy polling is to be used as the basis for governmental decisions, it would probably be necessary to construct a different cross-section for every issue.
Knowledge of the issue. The question, “For whom are you going to vote for President of the United States?” is clear-cut and meaningful—at least to voters. This clear-cut and meaningful quality is not present, however, in most public policy questions. Here we are faced both with semantical and informational difficulties.
An apocryphal story is told that illustrates the semantical difficulty. An audience research poll asked “Do you think too many adjectives are used on the radio?” The responses supposedly were: 5 per cent “Yes”; 5 per cent “No”; and 90 per cent “What are adjectives?” Something like this can be cleared up by pretesting the question on a few people; but the matter can become somewhat more complicated. A writer in the magazine Tide (March 14, 1947) reported his experience with the question: “Are you in favor of or opposed to incest?” Forty per cent of the sample (size not stated) had no opinion, and 33.5 per cent of the rest were in favor. Another question was: “Which of the following statements most closely coincides with your opinion of the Metallic Metals Act: it would be a good move on the part of the United States; it would be a good thing but should be left to the individual states; it is all right for foreign countries but shouldn’t be required here; it is of no value at all?” Needless to say, the “Metallic Metals Act” does not exist; but seventy per cent selected one of these alternatives, the majority being in favor of leaving it to the states.
There are many occasions when, whether the question is understood perfectly by the respondent or not, he lacks sufficient information on which to form an opinion or make an intelligent reply. Yet, as the story about the “Metallic Metals Act” shows, it is more ego-enhancing to exhibit an opinion on a topic than not to. Consequently, many polls blithely present the most complex issues to their cross-sections and come up with a set of answers labeled “Public Opinion.” The social utility of this sort of thing is certainly questionable. According to an AIPO report released to the press last February 18, 53 per cent of the American public thought that the Taft-Hartley Act should be revised, repealed, or left unchanged. The rest admitted they had not heard of it or had no opinion. Did 53 per cent of the American public last February really have sufficient information about the Taft-Hartley Act on which to base a meaningful opinion? Yet there it is. And this is just the sort of thing that people who “wish to lead” are invited to rely upon for their guidance.
Intensity of opinion. How deeply set is the opinion? How strongly is it felt? The difficulty in estimating the intensity with which an opinion is held does not present serious obstacles to election forecasting. Of two persons who both indicate that they are going to vote for Truman, one may feel ten times as strongly about it as the other. No matter; each has only one vote. However, this problem of intensity of opinion becomes very real when serious attempts are made to predict other than voting behavior. During wartime, for example, the administrators of domestic propaganda should be far more interested in the intensity with which the enemy is hated than in simply how many people say they hate the enemy.
Overt and covert opinion. Overt opinion is that which is communicated to somebody else. Covert opinion is that which is kept to one’s self. And there is no necessary correlation between the two. Fortunately for election forecasting there has, at least up until 1948, been but little tendency for people to mask or conceal their choice of candidate in responding to an interviewer’s questions.
A comparable degree of candor cannot be expected to exist in all public-policy polling. Many publicly expressed opinions on Communism are probably stereotyped socially-acceptable opinions rather than a true reflection of covert opinions. The same might be said about the overtly expressed opinions of many Southerners toward the various issues that are euphemistically lumped under the symbol of “states rights.” Public-opinion polling probably would have to go to the lengths which Alfred Kinsey invented and employed in his studies of sexual attitudes and behavior in order to solve this problem of reliably transferring covertly held opinions into overt expression. And even then the pollsters could not be sure.
Symbolic vs. non-symbolic behavior. One of the several reasons why election forecasting has been so successful is that voting in an opinion poll and voting in an election booth are similar types of behavior. In both cases, all one has to do is “symbolize” one’s allegiance—in one case by the use of spoken words, in the other by marking an X in a box or by pulling a lever or pushing a button. If one wanted to predict from a poll how much money a party would raise, it would be no simple matter, as we would be trying to predict real behavior as distinguished from “symbolic” behavior.
Yet almost all market-research polling and much public-policy polling do try to do just this. For example, Life magazine recently proclaimed on its editorial page that Roper polls indicated that the majority of people would resort to the black market in the United States rather than again submit to rationing. This is getting into risky territory: Life has predicted, from what amounts to a vote against rationing, the implementation of this opinion by actual dealing on the black market.
The prediction of behavior at the non-symbolic level—the level of the real act—from behavior at the symbolic level is obviously bound up with the problem of gauging intensity of opinion. Nevertheless, there is more to the problem than this. Some persons may actually believe very intensely that they will do thus-and-so in a certain situation, only to find themselves incapable of so acting when faced with the actual situation itself. Professor LaPiere of Stanford, pointing up this problem, asks us to suppose that we offer one hundred thousand dollars to a person to eat a pound of raw human flesh. The person very intensely wants the hundred thousand dollars and very strongly believes that he will carry through his share of the bargain to obtain it. Yet when faced with the actual fare he faints dead away, being prevented from following through by his prior cultural conditioning.
It is on this point, too, as Drs. Flowerman and Jahoda pointed out in their article on polls on anti-Semitism in this department two years ago, that any attempt to predict actual anti-Semitic behavior from poll responses classified by pollsters as anti-Semitic must fall down. We may learn that so and so many per cent of the people think Jews have too much economic power: to predict then what they would do about it, if anything, is virtually impossible. Even when we ask them directly what they would do about it, we must remember the critical difference between checking on a card a possibility—“would you join a movement to reduce Jewish power?”—and actually joining a movement.
Question selection and wording. A final point which should be made in our discussion of basic differences between election forecasting and public policy polling has to do with the selection of questions. In election forecasting this matter presents no problem at all. Respondents are simply asked for whom they intend to vote. The question, one might say, is chosen for the pollster by the election he is attempting to forecast. But in public-policy polling we must realize that there are as many questions as there are issues, and there are as many issues as there are stars in the sky. Also, there is an infinite variety of emphasis and form and approach that questions may take toward any given issue. Public-policy polling thus presents us with this most important, and currently unsolved, problem: Of the myriad issues and myriad approaches to these issues, who selects the questions to be asked and reported upon, and rejects all others?
_____________
To sum up: A poll can ask people what they think about an issue, and then report what a sample of respondents say they think. If one has selected a proper cross-section of the total population it may even be possible to predict that, if everybody in the country were asked the same question, their division of responses as to what they say they think would be the same as the division of responses of the sample, allowing for a reasonable degree of sampling error. But what this opinion means—what it means to those who give it, and what it means when confidently stated as a percentage of “the American people”—this we do not know. To suggest that these reports of opinion, of whose meaning no one is sure, should become the firm basis for the conducting of a democratic society, is not science. It is dangerous irresponsibility.
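The "reasonable degree of sampling error" allowed for above has a simple standard formula: for a proportion p reported from a simple random sample of n respondents, the 95 per cent margin of error is roughly 1.96 times the square root of p(1 − p)/n. A brief sketch, which also shows why the pollsters' undisclosed sample sizes matter; the sample sizes below are arbitrary choices for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# The same reported 53% figure carries very different uncertainty
# depending on the (unpublished) number of respondents:
for n in (300, 1500, 10000):
    print(n, round(100 * margin_of_error(0.53, n), 1))  # in percentage points
```

With n withheld, a reader cannot tell whether a reported 53 per cent means "somewhere between 47 and 59" or "between 52 and 54" — which is precisely the repeatability objection raised earlier.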
In conclusion, let us assume that all the perplexing problems of measuring public opinion on any and all issues are eventually ironed out, and we have an instrument whereby our political leaders may automatically assess mass opinion. What then? Should political leaders, both elected and appointed, ignore their presumably larger resources of information and wider experience, and abdicate their prerogative of choice of methods and timing in guiding affairs of state to blindly follow the “dictates of the people”? Or keep their views concealed until the people “come around” or are “brought around”?
The development of the perfect opinion poll, if such there ever be—and the use of the present-day polls as if they were perfect—presents us then with major problems in political philosophy. One aspect of these problems has been well put by Harwood Childs (An Introduction to Public Opinion): “Assuming that the voices of Gallup and the magazine Fortune are also the voices of the people, the question arises whether they are also the voices of God and should govern the acts of legislators. Are we prepared to have public opinion not only reign but also govern?” And Professor John C. Ranney (“Do the Polls Serve Democracy?”, Public Opinion Quarterly, Fall 1946), answering some criticisms of the role of polls in democracy, presents a very powerful one himself: democracy, he points out, is not the mechanical register of the will of the people. Essential to it is a process of discussion and consideration, and interplay between legislature and executive, and between different political interests, which gradually informs and brings out the implications of various proposals. “There is something not only pathetic but also indicative of a basic weakness in the polls’ conception of democracy in the stories of those who tell interviewers they could give a ‘better answer’ to the questions if only they had time to read up a bit or think things over. It is precisely this reading up and thinking over which are the essence of political participation . . .”
Gallup would no doubt retort to these criticisms, as he often has, with the statement that “Almost always the public is ahead of its legislators.” But the present writer feels, from his observation of public policy polling, that, rather, almost always the public opinion poll is ahead of the public. By selection of questions to emphasize and de-emphasize, by timing, often by “adjustments” inevitably influenced by the pollsters’ own point of view, the polls are more likely to be selling the poll-owners’ opinions as public opinion than public opinion itself.
The real issue at the present time would seem to be not what to do when the polls are perfect, if they ever can be: but what to do about their claim to be accepted as reliable expressions of the national will when they are so far from being that, on many counts.
_____________
1 E. F. Goldman of Princeton’s Office of Public Opinion Research reported in 1944 that approximately 44 per cent of our voters had never heard of the opinion polls, and only nine per cent claimed to read their reports regularly.
2 Samuel Flowerman and Marie Jahoda, writing about polls on anti-Semitism in the April 1946 COMMENTARY, estimated that at the most 150,000 people had been asked questions about anti-Semitism, and concluded that “even if the interviews were not innocuous and did create anti-Semitic feeling in a few cases where none existed before, which we doubt, the total number of those affected must be insignificant.”