Opinion polls are survey research of public opinion and aspirations using probability sampling. "Probability sampling is the fundamental basis for all survey research. The basic principle: a randomly selected, small percent of a population of people can represent the attitudes, opinions or projected behavior of all the people, if the sample is selected correctly." The purpose: "The fundamental goal of a survey is to come up with the same result that would have been obtained had every single member of a population been interviewed."
Ensuring a Good Sample
Gallup notes, "People generally believe the results of polls, but they do not believe in the scientific principles on which the polls are based." Most say that a survey of 1,500 to 2,000 respondents -- a larger-than-average sample size for national polls -- cannot represent the views of all Americans. Gallup then carefully explains how its sampling takes place and how, in interpreting the results, "several standard caveats" should be observed. The reason for these caveats (and not all polls meticulously observe them, as noted below) is to make sure the sample is not skewed or biased.
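The scientific principle Gallup appeals to can be made concrete with the standard margin-of-error formula for a proportion estimated from a simple random sample. The sketch below is plain Python written for illustration (it is not any polling organization's code); it shows why a properly drawn sample of about 1,500 respondents is enough to pin a national figure down to within roughly 2.5 percentage points at the 95% confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion
    estimated from a simple random sample of size n.
    p=0.5 is the worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# A national sample of ~1,500 respondents already pins the result
# down to within about +/- 2.5 percentage points:
for n in (100, 500, 1500, 2000):
    print(f"n={n:5d}  margin of error = +/-{margin_of_error(n):.1%}")
```

Note that the margin shrinks with the square root of the sample size, which is why quadrupling a sample only halves the error and why national polls rarely bother going far beyond 2,000 respondents.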
Although opinion polls can be instructive in understanding where a populace stands on an issue, factors such as the inability to see the questions asked or the demographics of those polled can bias a poll's results.
For example, when "secret" polls are released without the underlying data being available for scrutiny, such as the "secret" poll which found that "Iraqis support attacks on British troops," the public is not obliged to accept the result as trustworthy, since the data behind it cannot be verified. The polling numbers, the questions asked of the respondents, and the demographics used are neither included nor referenced in the account of how this "secret" poll was conducted (hence the fact that it is called a "secret" poll). The lack of open disclosure, and the fact that the poll was used to discount the intelligence to that point (quoted in the same article as, "The secret poll appears to contradict claims made by Gen Sir Mike Jackson, the Chief of the General Staff, who only days ago congratulated British soldiers for 'supporting the Iraqi people in building a new and better Iraq'"), leads critics to conclude that it is not a fair or accurate assessment of Iraqi opinion. The article also states, "The findings differ markedly from a survey carried out by the BBC in March 2004 in which the overwhelming consensus among the 2,500 Iraqis questioned was that life was good. More of those questioned supported the war than opposed it." The poll likewise contradicts the optimism shown by Iraqis in the polls taken in March 2007, quoted in the Conservapedia entry "Iraq War" under the headline "Iraqi Liberation."
It is therefore important to understand that, to ensure an accurate polling result, the sample must be representative enough of the whole population to justify extrapolation.
Some areas which can introduce bias and skew the results of a poll
1) What was the Question?
How a question is phrased can prejudice the answer or lead the respondent to a certain conclusion through the bias built into the question itself. A classic example: "Have you stopped beating your wife yet?"
2) Who was asked the Question? - Demographics
If only a certain set of people is asked a question, the bias within that group can be misrepresented as the belief of the entire population. For instance, in Iraq, a door-to-door sample of several hundred to a thousand respondents would be expected to give a positive answer when asked whether Saddam was a good ruler of Iraq if the sample lived in Tikrit, Saddam's hometown; but in northern Kurdistan, in areas containing survivors of his gas attacks and relocation efforts, a sample of the same size would not agree with the Tikrit responses. Neither response would be truly indicative of the view of the Iraqi people as a whole. Similarly, surveying the city of Baghdad about how bad terrorism in the country is would not be representative of Iraq as a whole, since most of Iraq was not experiencing terrorist attacks while the capital city was. A sampling base must be not only numerically sufficient but geographically widespread enough to be indicative of the entire populace's views on any topic that claims to represent a whole country's view.
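The Tikrit/Kurdistan example can be illustrated with a small simulation. The regional support rates and population weights below are purely hypothetical numbers invented for illustration, not real polling data; the point is only that a sample drawn from one region can land far from the population-wide figure, while a sample spread in proportion to population lands close to it:

```python
import random

random.seed(0)

# Hypothetical regional opinion rates and population weights,
# chosen only to illustrate the point -- NOT real polling data.
regions = {
    "Tikrit":    {"weight": 0.05, "support": 0.90},
    "Kurdistan": {"weight": 0.15, "support": 0.05},
    "Baghdad":   {"weight": 0.25, "support": 0.40},
    "elsewhere": {"weight": 0.55, "support": 0.45},
}

# Population-wide rate: the weighted average over all regions.
true_rate = sum(r["weight"] * r["support"] for r in regions.values())

def poll_one_region(name, n=1000):
    """Sample n respondents from a single region only."""
    p = regions[name]["support"]
    return sum(random.random() < p for _ in range(n)) / n

def poll_nationwide(n=1000):
    """Sample n respondents in proportion to regional population."""
    names = list(regions)
    weights = [regions[r]["weight"] for r in names]
    total = 0
    for _ in range(n):
        region = random.choices(names, weights)[0]
        total += random.random() < regions[region]["support"]
    return total / n

print(f"true nationwide rate : {true_rate:.2f}")
print(f"Tikrit-only sample   : {poll_one_region('Tikrit'):.2f}")
print(f"nationwide sample    : {poll_nationwide():.2f}")
```

A sample confined to one region reproduces that region's opinion, not the country's; only a sample whose geographic spread mirrors the population recovers the true rate.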
3) Factual, but leading toward a result
In polls, such as political polls, a question can be biased or slanted toward a certain response even though it appears to rely only on known facts. This can happen within the question itself or within the order in which questions are asked. That may be acceptable in a poll taken by a party of its own members, where the bias is understood, but if the leading question (or ordering) is put to the population in general it can produce a biased response. Psychologically, human beings tend to be swayed toward joining others in their opinions simply to be counted among the popular opinion (people like to be on the "winning" side of an argument).
A question suitable for polling members within a party, where the bias is accepted, such as one asking in favor of a policy of continued funding for the troops, could be phrased this way: "In light of the fact that a Los Angeles Times poll in January 2007 shows that only 25 percent of the public believes the new Democratic majority actually has a plan, and the fact that an Angus Reid poll taken in January 2007 says that 88% of the population believes Congress should allow funding for the troops currently there (combining the responses 'allow funding for troops currently there' and 'allow all funding for the war in Iraq' in the poll), do you agree with the majority of respondents in the USA Today poll taken in February 2007, which found that '6 in ten' oppose denying funding for the additional troops being sent to Iraq?"
Democrats polled with this question would be likely to feel that bringing up their party's lack of a plan in the first part of the question prejudices the response to the subsequent statistics on troop support, whereas Republicans would be less likely to take offense. The responses could then be skewed by party affiliation even though only proven and verifiable facts were presented.
4) Who you include or exclude from a poll
Although a poll of a vast, demographically and ideologically diverse section of the population may seem representative, other factors can still skew the results, such as the ages included, as in the poll reported by the New York Times under the headline "51% of Women Are Now Living Without Spouse." Although that appears to be a statement about the entire adult population, the Times got its numbers from the Census Bureau's new American Community Survey, which surveyed "117 million women over the age of 15." Is it really a surprise that millions of 15-year-olds are "living without spouse"? What about 16-, 17-, or 18-year-olds? It is worth noting that the age of consent in the United States averages just over 16 years; in several states, including California, it is 18. Note also that the Census survey's "Data Profile Highlights" page simply refers to "Female, Now married, except separated (population 15 years and over)," and does not call these respondents "women," as the Times did in its report of the poll.
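The denominator effect described above can be shown with simple arithmetic. The figures below are round hypothetical numbers, not the Census Bureau's actual data; they only demonstrate how widening the surveyed age range with a nearly all-unmarried group can push a headline percentage past 50%:

```python
# Hypothetical round numbers (NOT the Census figures) showing how the
# choice of denominator moves a headline percentage.

# Suppose 48% of adult women (18 and over) live without a spouse:
adult_women     = 100_000_000   # women 18 and over (illustrative)
adult_unmarried =  48_000_000   # of whom live without a spouse

# Widening the survey to "15 and over" adds an age band that is
# almost entirely unmarried:
teens           =  10_000_000   # girls aged 15-17 (illustrative)
teen_unmarried  =   9_900_000   # nearly all "living without spouse"

narrow = adult_unmarried / adult_women
wide   = (adult_unmarried + teen_unmarried) / (adult_women + teens)

print(f"18+ denominator : {narrow:.1%} living without a spouse")
print(f"15+ denominator : {wide:.1%} living without a spouse")
```

With these invented figures, a minority statistic under an 18-and-over denominator becomes a majority statistic once 15-to-17-year-olds are counted, which is precisely the kind of shift the headline exploited.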
5) Method of interviewing (in person vs. telephone vs. mail vs. online)
People can be influenced by body language; entire books have been written on the topic. When a survey is conducted in person, attitudes conveyed by the survey takers can prejudice the result. Likewise, when hearing a voice on the phone, the caller's attitude or affiliation can influence the answers. For instance, if Playboy magazine were to survey the populace about sexual attitudes using Playboy Bunnies as its telephone interviewers (or women who sounded like them and identified themselves as working for the magazine), and the same sample were asked the same questions by the Institute for Biblical Defense using pastors' wives (or women who sounded like them and identified themselves as working for that institute), the results could differ markedly. When people know the bias of the organization they are speaking to, and the results that organization would find acceptable, they may tailor their answers to reflect the opinions of those people. The relative anonymity of a mail-in or internet opinion poll can allow for responses less influenced by these factors.
6) Other factors
Gallup also noted that sample size, the random selection technique used in creating the sampling frame, and the execution of the sampling are among the standard and primary caveats.