Business News

Live Shows

Watch, like and share live events on AMSNEWS.TV. AMSNEWS.TV LIVE streams video and connects your event to audiences on the web and on mobile devices.

Live Show: Oct 29, 2017

EZ Money Method

EZ Money Method: Beware of CPA Scams

Is the EZ Money Method CPA marketing platform a scam, or a legitimate system designed to help affiliate marketers who participate in cost-per-acquisition (CPA) marketing?

CPA marketing is a lucrative and cost-effective method of marketing used by companies ranging from startups to well-established Fortune 500 corporations.

The EZ Money Method system was developed to help replicate the CPA marketing success often enjoyed by experienced internet marketers. In most cases, marketers have to prove themselves to the CPA affiliate program they are seeking to do business with, and this acceptance process often includes phone interviews, a review of website or blog traffic analytics, and more.

EZ Money Method has established an excellent CPA partnership with My PC Backup, which gives average internet marketers with little or no experience the opportunity to earn substantial commissions from the sale of the My PC Backup CPA offer.

The word "scam" is loosely used by those who seem to overlook the 4 "S" factors that are imperative for any business opportunity to succeed, for which MY PC Backup exceeds, and the EZ Money Method system exemplifies.

There are plenty of start-up CPA affiliate offers that may be scams, and I strongly urge you to beware of them and stay away. My PC Backup is not one of them, nor is the powerful and innovative EZ Money Method CPA affiliate marketing training platform.

Content Marketing

5 Reasons Why Content Marketing Is a Great Idea for Your Business

Do you know what exactly content marketing is? Content marketing is a marketing approach for creating and distributing relevant and valuable content to attract, acquire and engage a clearly defined target audience.

The main objective of content marketing is to drive profitable customer action. This content can take a variety of formats, such as news, videos, whitepapers, blogs, case studies and question-and-answer articles.

So how old is content marketing? Its history can be traced back to 1895, when John Deere launched a magazine named The Furrow.

This magazine aimed to provide farmers with information on how to become more profitable. Content marketing has continued to evolve ever since.

Why does my organization need content marketing? An organization needs content marketing to generate more leads, more traffic and more customers. On average, a company that starts a content marketing campaign achieves over 300% more growth than a company that does not.

Are you creating awesome content for your customers and visitors? If not, here are some of the best ways to change your brand image through content marketing.

Boosts brand awareness

Every company aspires to become a brand, but having a name alone is not enough. It is somewhat like acting: every actor aspires to become an icon like Tom Cruise or Bradley Cooper, yet not every famous person is an icon. Similarly, a company with a strong brand image shows how consistent it has been in creating an impression in the minds of its customers.

This is achieved by publishing related content that makes the brand familiar to people. Consumers are flooded with information and are increasingly likely to ignore advertisements, so your content must be relevant: useful, interesting and targeted at a specific audience. You may create content for entertainment or educational purposes, or you can check various forum sites to learn what problems people are facing and help them by writing a good blog post about it. Always ask yourself: how are my customers benefiting from the information I provide? The content must reflect your brand’s image and be of interest to your customers without trying to sell anything. This will help you stay top-of-mind the next time people need your services.

Creating content that builds trust

Brands must always think first about whom they are writing for and why. Trust is easier to build when the right reader is given the right content. First, define your target audience well: make a list of its characteristics, for example age, sex, income, education, marital status, geographic distribution and consumer habits, and create your content based on this information. If your audience wants to receive your content via email, make sure you make it available that way. Put the information where the audience wants it to be.

A good relationship with customers can be established with content marketing. It is not a one-time effort but a continuous commitment, so you need to determine how often you will release new content. If there are eight electricians in a town but only one provides helpful information that educates the community about electricity safety, problems and innovations, that brand will stand out as an expert. Since consumers get this advice without paying for it, it helps the brand earn a reputation for being trustworthy.

Content that makes customers want to purchase a product

A customer will rarely buy a product just by looking at an image of it; they are tempted to buy by reading the product’s content description. Customers hardly ever buy a product on the basis of its features alone; they buy because they perceive benefits in those features. Your content should precisely define the benefits a customer would get from your product or service. Content that guides a consumer through the buyer’s journey (awareness, evaluation and purchase) results in more sales. A buyer will be motivated to purchase once he or she reads content that relates to their requirements.

Shareable content to promote your business

Interrelationships, the connections between multiple people or groups, are essential for any business to flourish, and they can be built through content sharing. People spend much of their time on social media, so it is the best platform on which to share your business content. The more people share content from your website with their social circles, the more people will see your content, which in turn increases the number of visitors to your website. This act of sharing also improves your website’s search ranking. Post your content periodically on the social media channels your community is interested in. One of the primary advantages of social media is that it allows real-time communication and updates. In short, shareable content will make word of your business spread like wildfire.

Fresh, updated website content

One of the most common issues observed with websites is that their content is not updated periodically. A stale website is less interesting and less professional than one whose content changes regularly. There are several ways of keeping your website’s content fresh, such as incorporating a blog, adding images and videos, and adding client testimonials and new menu pages. Another way to keep customers engaged is to maintain an RSS feed that is relevant to your industry.
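For site owners who want to automate that last idea, here is a minimal Python sketch of pulling recent headlines from an industry feed with the third-party feedparser package; the feed URL is a placeholder, not a real endpoint.

```python
# Minimal sketch: fetch the latest items from an industry RSS feed so a
# site can surface fresh, relevant headlines. Requires: pip install feedparser
import feedparser

# Placeholder URL; substitute a real feed for your industry.
feed = feedparser.parse("https://example.com/industry-news.rss")

for entry in feed.entries[:5]:  # show the five most recent items
    print(entry.title, "-", entry.link)
```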

As a result, a freshly updated website is far more likely to stay near the top of search results. This increases the organization’s business as customer-driven traffic grows, and a higher search engine ranking also lends the organization a good image and reputation.

With the help of these strategies, precise and updated content can be developed, helping marketers generate more sales and attract customers to their websites.

Email Marketing

5 Email Marketing Errors To Be Avoided At Any Cost

Draft your emails well, get the message correct and avoid basic silly mistakes to make your emails really effective!

The dynamic world of marketing is changing even as you read this, and new platforms are opening up to reach the customer. One method that has really stood the test of time, however, is email marketing. No matter how advanced the new methods are, email marketing is still proving extremely effective, and chances are it will remain so.

Undoubtedly, this method went through a lull phase too, but it is coming back with a bang. In fact, a study of digital communication methods revealed that as many as 55% of marketers say they retain faith in this efficient channel. Statistically speaking, some 108.7 billion corporate emails were sent daily, a figure likely to rise to 139.4 billion per day by 2018. In simple words, this means each business user will receive around 140 emails a day. Given this context, what should you do to keep your email content useful and attractive? Here are some tips on increasing the efficacy of your emails:

1. Providing Irrelevant Content to Readers

It is very easy to get carried away with new products and new topics that others may be talking about. What you need to analyze, however, is what your customers want to read. Analyzing their reading behaviour by checking their social media accounts or the posts they have participated in is a good way to begin. You could also check the web traffic on your website and your most frequently read pages. Once you get the hang of this, you can email content relevant to each segmented data group. If you are still a beginner, then rather than picking complicated content like new advancements or politics, it is better to stick to common but interesting topics like holiday destinations or fitness trends, which are relevant to most readers.

2. Lengthy Emails

A longer email has a lower chance of being read; there is also a high probability that it will end up in spam or run into formatting issues. So, if you really want your email to be noticed, keep it short and concise. You can still provide relevant links for readers who want to learn more about your products and services. Also, always keep the subject line short and interesting, and come straight to the point in the first 50 words of the email. Treat roughly 750 words as the upper limit, but try to keep the message short and information-rich.

3. Boring Text Arrangement

Consider a person checking 140 emails in a day. That is an exasperating amount, right? If you need to get noticed in that pile, it is important to arrange your text properly. Avoid long, essay-like emails; use bullet points wherever possible to convey your message. Adding images also improves retention and makes the email appear less boring. Keep in mind that the arrangement should be appealing the moment your reader opens the email. Using blocks, headings, subheadings and colors also helps increase the attractiveness of your emails.

4. Problems in Formatting

Formatting is the most commonly ignored aspect of writing emails. Avoid spam-trigger words like 'Discount', 'Sale', 'Click here', 'Bonus' and so on; one rule says that if your email contains more than 200 such suggestive words, chances are it will end up in the Junk folder. Another tip: if you draft your email in MS Word and then copy-paste it into your web client, your formatting may go awry. The trick is to draft mails in a plain-text editor such as Notepad. Do not forget to underline links, as readers expect that convention in emails. Stick to proper grammar and avoid slang, as it can make your email sound too casual.
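As a rough illustration of the trigger-word tip above, here is a minimal Python sketch that counts suspect phrases in a draft; the phrase list is a small illustrative sample, not an authoritative spam-filter list.

```python
# Minimal sketch: count common spam-trigger phrases in a draft email.
# The phrase list below is illustrative only.
TRIGGER_PHRASES = {"discount", "sale", "click here", "bonus", "free", "winner"}

def count_trigger_phrases(text: str) -> int:
    lower = text.lower()
    return sum(lower.count(phrase) for phrase in TRIGGER_PHRASES)

draft = "Huge SALE today! Click here for your bonus discount."
print(count_trigger_phrases(draft))  # prints 4
```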

As simple as these points may sound, the reality is that they are the ones most often missed. So get your basics in place and write the email that really appeals to your readers.

 

Health Fair Video

Stroke Awareness Health Fair

May is American Stroke Awareness Month. Stroke is the fifth-leading cause of death and the leading cause of disability in the US. During a stroke, 1.9 million brain cells die every minute without blood flow to the brain. This event will feature healthcare vendors who will provide stroke advice, health promotion, screenings, massage therapy and more. Vendors include the Holy Cross Hospital stroke program, the Prince George’s Hospital stroke program, the Washington Hospital stroke program, Vitamin Shoppe, It Works Global Products, the American Heart Association, the Center for Vein Restoration and a fitness expert. Exercise classes will also take place at the same time. Our goal for this event is to further educate our community about stroke. Attendees will be able to engage in discussions, exercise and screenings to improve their overall health. There will also be giveaways and raffle prizes.

Sharon Harriston, RN

 

Workforce Automation

Public Predictions for the Future of Workforce Automation

A majority of Americans predict that within 50 years, robots and computers will do much of the work currently done by humans – but few workers expect their own jobs or professions to experience substantial impacts

From self-driving vehicles and semi-autonomous robots to intelligent algorithms and predictive analytic tools, machines are increasingly capable of performing a wide range of jobs that have long been human domains. A 2013 study by researchers at Oxford University posited that as many as 47% of all jobs in the United States are at risk of “computerization.” And many respondents in a recent Pew Research Center canvassing of technology experts predicted that advances in robotics and computing applications will result in a net displacement of jobs over the coming decades – with potentially profound implications for both workers and society as a whole.

The ultimate extent to which robots and algorithms intrude on the human workforce will depend on a host of factors, but many Americans expect that this shift will become reality over the next half-century. In a national survey by Pew Research Center conducted June 10-July 12, 2015, among 2,001 adults, fully 65% of Americans expect that within 50 years robots and computers will “definitely” or “probably” do much of the work currently done by humans.

Yet even as many Americans expect that machines will take over a great deal of human employment, an even larger share (80%) expect that their own jobs or professions will remain largely unchanged and exist in their current forms 50 years from now. And although 11% of today’s workers are at least somewhat concerned that they might lose their jobs as a result of workforce automation, a larger number are occupied by more immediate worries – such as displacement by lower-paid human workers, broader industry trends or mismanagement by their employers.

Two-thirds of Americans think it’s likely that in 50 years robots and computers will do much of the work currently done by humans

When it comes to their general predictions for the future of human employment and workforce automation, roughly two-thirds of Americans expect that within the next 50 years robots and computers will do much of the work currently done by humans. Some 15% of Americans expect that this level of automation will “definitely” happen, while 50% think it will “probably” happen. On the other hand, one-quarter of Americans expect that this outcome will probably not happen, and 7% believe it will definitely not happen.

In general, Americans of various demographic backgrounds have largely similar expectations regarding the future of automation. However, those under the age of 50 – as well as those with relatively high household incomes and levels of educational attainment – are a bit more skeptical than average about the likelihood of widespread workforce automation. Some 35% of 18- to 49-year-olds think it unlikely that robots and computers will do much of the work done by humans, compared with 27% of those ages 50 and older. And 37% of those with a college degree think that this outcome is unlikely (compared with 28% of those who have not attended college), as do 38% of Americans with an annual household income of $75,000 or more (compared with 27% of those with an annual household income of less than $30,000 per year).

Similarly, Americans who work in the government, nonprofit or education sectors are a bit more skeptical about the future of workforce automation than are Americans who work for a large corporation, medium-sized company or small business. Just 7% of Americans who work in the government, education or nonprofit sectors expect that robots and computers will definitely take over most human employment in the next 50 years, while 13% of those who work for a large corporation or small business or medium-sized company are certain that this will occur.

Despite their expectations that technology will encroach on human employment in general, most workers think that their own jobs or professions will still exist in 50 years

Yet even as most Americans expect significant levels of workforce and job automation to occur over the next 50 years, most of today’s workers express confidence that their own jobs or occupations will not be impacted to a substantial degree. Fully 36% of workers anticipate that their current jobs or occupations will “definitely” exist in their current forms five decades from now, while an additional 44% expect that their jobs will “probably” exist in 50 years. Roughly one-in-five workers expect that their current jobs will “probably not” (12%) or “definitely not” (6%) exist in their current forms that far in the future.

Overall there are relatively few differences in these expectations based on workers’ demographic characteristics, and the differences that do exist are relatively modest. For instance, younger workers are a bit more likely than older workers to expect that their current jobs will exist 50 years in the future: 84% of workers ages 18 to 29 expect that this will be the case, compared with 76% of workers ages 50 and older.

And as was the case for their predictions for workforce automation in general, workers in the government, education and nonprofit sectors are a bit more confident than those in the private sector that their jobs will exist in their current forms 50 years from now: 86% of these workers expect that this will be the case (including 42% who indicate that their current jobs will “definitely” exist), compared with 79% of those who work for a large corporation, medium-sized company or small business.

Along with these differences based on place of employment, workers’ views on this subject also differ somewhat based on the type of work they currently do. For instance, 41% of workers whose jobs involve mostly manual or physical labor expect that their current jobs will “definitely” exist in their current forms in 50 years, as do 34% of those who describe their current occupations as “professional.” By contrast, just 23% of those who currently work in a managerial or executive role expect that their current jobs will exist unchanged for the next five decades. But overall, a substantial majority of workers across a range of categories express confidence in the long-term staying power of their current jobs or professions.

One-in-ten workers are concerned about losing their current jobs due to workforce automation, but competition from lower-paid human workers and broader industry trends pose a more immediate worry

Many Americans expect workforce automation to become much more prominent over the coming half-century, but relatively few of today’s workers see computers and robots as an imminent threat to their job prospects at the moment.

When asked about a number of issues that might cause them to lose their current jobs, just 11% of workers are at least somewhat concerned that they might lose their jobs because their employer replaces human workers with machines or computer programs. On the other hand, roughly one-in-five express concern that they might lose their jobs because their employer finds other (human) workers to perform their jobs for less money, or because their overall industry workforce is shrinking. Among the five concerns evaluated in this survey, the most prominent, albeit by a narrow margin, is poor management by their own employer:

  • 26% of workers are concerned that they might lose their current jobs because the company they work for is poorly managed.
  • 22% are concerned about losing their jobs because their overall industry is shrinking.
  • 20% are concerned that their employer might find someone who is willing to do their jobs for less money.
  • 13% are concerned that they won’t be able to keep up with the technical skills needed to stay competitive in their jobs.
  • 11% are concerned that their employer might use machines or computer programs to replace human workers.

Workers whose jobs involve primarily manual or physical labor express heightened concern about all of these potential employment threats, especially when it comes to replacement by robots or other machines. Fully 17% of these workers are at least somewhat concerned about the threat from workforce automation, with 11% indicating that they are “very concerned.” By contrast, just 5% of workers whose jobs do not involve manual labor express some level of concern about the threat of workforce automation.

Telephone Surveys

What Low Response Rates Mean for Telephone Surveys

Telephone polls still provide accurate data on a wide range of social, demographic and political variables, but some weaknesses persist

A new study from Pew Research Center suggests that, after decades of consistent decline, U.S. telephone survey response rates have plateaued over the past four years. And contrary to the current narrative that polls are under siege, the data show that the bias introduced into survey data by current levels of participation is limited in scope. This report is the fourth in a series of Center studies tracking the impact of such changes in survey response (see previous studies in 1997, 2003 and 2012). Among the key findings:

After decades of decline, the response rates for telephone polls like those conducted for Pew Research Center have stabilized in recent years at around 9%. While the stabilization is good news for the industry, such low response rates do signal the potential for bias to creep into surveys if the people who consistently participate in polls are different from those who do not. That said, the current study and prior research suggest that response rate by itself is an unreliable indicator of bias.
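Why a low response rate does not by itself imply a large bias can be seen in the standard decomposition of nonresponse bias: the bias of a respondent-based estimate is the nonresponse rate multiplied by the difference between respondents and nonrespondents on the quantity being measured. A minimal Python sketch with hypothetical numbers (not figures from the report):

```python
# Minimal sketch of the nonresponse-bias decomposition:
#   bias = (1 - response_rate) * (respondent_mean - nonrespondent_mean)
# All values below are hypothetical.
response_rate = 0.09   # a 9% response rate, as in recent phone polls
resp_mean = 0.52       # share holding some attitude among respondents
nonresp_mean = 0.50    # nearly identical share among nonrespondents

bias = (1 - response_rate) * (resp_mean - nonresp_mean)
print(f"bias = {bias:.3f}")  # ~0.018, small despite the low response rate
```

Even with 91% of the sample missing, the bias stays small as long as respondents and nonrespondents barely differ; the response rate only scales whatever difference exists.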

Telephone poll estimates for party affiliation, political ideology and religious affiliation continue to track well with estimates from high response rate surveys conducted in-person, like the General Social Survey. This provides strong evidence that decisions to participate in telephone surveys are not strongly related to political, social or religious attitudes. So even at low response rates, telephone surveys that include interviews via landlines and cellphones, and that are adjusted to match the demographic profile of the U.S., can produce accurate estimates for political attitudes.

Analysis of telephone survey respondents versus nonrespondents on variables from a national voter file suggests that survey participation is not strongly linked to partisanship. Affiliation with a particular political party does not appear to affect the likelihood that a person will participate in telephone polls, though those who participate in polls tend to vote more often than people who are less likely to take surveys.

There is no sign of an increase in nonresponse bias since 2012. On 13 demographic, lifestyle, and health questions that were compared with benchmarks from high response rate federal surveys, estimates from phone polls are just as accurate, on average, in 2016 as they were in 2012. The average (absolute) difference between the Center telephone estimates and the benchmark survey estimates was 2.7 percentage points in 2016, compared with 2.8 points in 2012.
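For readers who want the accuracy metric spelled out, here is a minimal Python sketch of computing an average absolute difference between poll estimates and benchmarks; the paired values are hypothetical, not the report’s data.

```python
# Minimal sketch: average absolute difference between phone-poll estimates
# and high response rate benchmark estimates. Values are hypothetical.
phone     = [0.51, 0.12, 0.27, 0.63]  # phone survey estimates per question
benchmark = [0.59, 0.10, 0.25, 0.60]  # benchmark estimates per question

avg_abs_diff = sum(abs(p - b) for p, b in zip(phone, benchmark)) / len(phone)
print(f"{avg_abs_diff * 100:.1f} percentage points")  # 3.8 points here
```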

On a range of demographic variables, differences between RDD telephone samples and the profile of all U.S. adults are relatively small with the important exception of educational attainment. In recent years, increasing the share of interviewing done with cellphones has improved representation of young adults and Hispanics. Like many survey organizations, the Center uses weighting to correct imbalances on major demographic variables (education, gender, race/ethnicity, region, age and more).

Telephone polls greatly overstate civic engagement, probably because of nonresponse bias. As has been established in previous work, the people who answer surveys are likely to be the same people who are involved in their community’s public life – they are joiners. Fortunately for pollsters, civic engagement is not strongly correlated with political attitudes or most other measures researchers attempt to study with surveys.

Telephone polls also overstate political engagement, but to a lesser extent. Both the benchmarking and the voter database analysis show that politically engaged adults are overrepresented in surveys. The magnitude of bias on political engagement tends to be in the single digits (e.g., telephone polls overstate the share of adults who are registered to vote by about 7 percentage points), whereas the magnitude of bias on civic measures is in the double digits (e.g., phone polls overstate the share of adults who participated in a sports or recreation organization in the last year by about 16 points).

The finding that a low response rate leads to substantial bias on some topics (e.g., volunteering) but not others (e.g., partisanship or religious affiliation) underscores the importance of having high response rate in-person surveys, which make such knowledge possible. Without surveys like the Current Population Survey, the American Community Survey, the American National Election Studies and the General Social Survey, it would be significantly harder, if not impossible, for researchers to determine where biases do or do not exist in low response rate public opinion polls.

At first glance, the above results might seem to fly in the face of perceptions that polls failed in the 2016 presidential election. Indeed, there were some large errors in critical Upper Midwest states and those polls fed expectations that Hillary Clinton would win the presidency. But such a synopsis overlooks the fact that national polls were actually quite accurate. Collectively, they indicated that Clinton had about a 3 percentage point lead nationally, and they were basically correct, as she ultimately won the popular vote by 2 points. Furthermore, according to a new report, there are clear reasons why national polls as a group fared better than state polls. For instance, national polls were much more likely than state polls to adjust for respondent education level in their weighting, which proved critically important in the 2016 election. In sum, while polling errors did contribute to the false expectation that Hillary Clinton would win the presidency, polling writ large was not broken in 2016, and researchers have identified factors that help explain why some polls performed better than others.

The current study uses two types of data to assess the representativeness of Pew Research Center phone surveys. Most of the results are based on a comparison of survey estimates to widely accepted benchmarks from government-conducted or government-funded surveys that have far less nonresponse than standard telephone surveys. Altogether, 29 benchmark measures were compared with identical (or nearly identical) questions asked on telephone surveys conducted by Pew Research Center.

A second source of data is a national database of adults that includes information about voter registration, turnout and partisanship for the vast majority of U.S. households. This dataset is one of a class of commercial products known as voter files, which are widely used by campaigns and others to contact voters; they leverage the fact that states are required to keep lists with the names, contact information and turnout history of residents who are eligible to vote. These voter file data were matched with the telephone sample used in a 2016 Pew Research Center survey to provide information on a group that otherwise proves difficult to examine: those who choose not to respond to the phone survey. This was accomplished by taking the 40,182 working telephone numbers called for a Pew Research Center survey and using those numbers to match people’s voting information, as many people have their phone number listed on their registration record. In this way, the survey’s respondents and nonrespondents were compared on several political measures to see if and where they differ.

A final point worth emphasizing is that live interviewer phone polls now represent a minority share of all polling conducted in the U.S. Online polls and automated (Interactive Voice Response) polls, or combinations of the two, are collectively more common than live interviewer phone polls and tend to have significantly lower response rates. This means that the findings presented in this report speak to only one part of the overall polling landscape, though it is an important part. Polls with methodologies similar to those of Pew Research Center continue to be conducted by major newspapers, broadcast networks, cable news organizations, universities and Gallup.

What is nonresponse bias?

The term bias can conjure up the thought of prejudice against certain kinds of people or a conscious effort to be unfair. Surveys can be biased in this sense if, for example, the questions are designed to favor one side of an issue. But when survey researchers and statisticians use the term, they mean something more general. In this context, bias is error that occurs when something about the way a survey is designed or conducted leads to results that are systematically different from what is true in the population. This is in contrast with what’s commonly called “sampling error” — the kind of error that occurs by chance because surveys interview a random sample of the population. Bias, as used in this study, does not imply favoritism toward a particular group or point of view, nor a conscious effort on the part of the researcher.

This report focuses on nonresponse bias in particular, which occurs when the kinds of people who are contacted and who agree to participate in a survey are systematically different from those who can’t be contacted or who refuse to participate. For example, younger people may be harder to reach for an interview. This would mean that those who are interviewed will tend to be older than the population as a whole. In turn, for questions that are strongly related to age, the results will overrepresent the attitudes and behaviors of older people if an effort is not made to correct the bias.
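To make the mechanism concrete, here is a minimal Python simulation of this age example, with hypothetical response propensities rather than figures from the study: when young adults respond at half the rate of older adults, the respondent pool skews old.

```python
# Minimal simulation of nonresponse bias: unequal response propensities
# by age distort the raw sample. Propensities are hypothetical.
import random

random.seed(1)
population = [random.choice(["young", "old"]) for _ in range(100_000)]

# Young adults are assumed to respond half as often as older adults.
propensity = {"young": 0.05, "old": 0.10}
respondents = [p for p in population if random.random() < propensity[p]]

pop_young = population.count("young") / len(population)
resp_young = respondents.count("young") / len(respondents)
print(f"young share: population {pop_young:.2f}, respondents {resp_young:.2f}")
# Roughly 0.50 in the population vs. roughly 0.33 among respondents.
```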

Concern about nonresponse bias has grown as nonresponse rates have grown. But it is important to note that survey researchers have been concerned about this issue for as long as modern survey research has existed. And methods of correcting for nonresponse bias are well understood and widely used. In particular, nearly all high quality surveys (including Pew Research Center’s) use some form of statistical weighting to ensure that their samples conform to the population with respect to geography, age, education, gender, race and other characteristics. However, nonresponse bias can still occur if respondents and nonrespondents differ on some dimension that is not accounted for in weighting. This report is an effort to measure and document the nature and extent of nonresponse bias in RDD telephone surveys of the sort conducted by Pew Research Center.
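As a rough illustration of the weighting idea (a minimal sketch, not Pew Research Center’s actual procedure, which adjusts many variables at once), here is a Python example of simple post-stratification: each group’s weight is its population share divided by its sample share, so overrepresented groups are weighted down and underrepresented groups up. All shares are hypothetical.

```python
# Minimal sketch of post-stratification weighting on one variable.
# Shares below are hypothetical illustrations.

# Unweighted sample composition by education.
sample_shares = {"college_grad": 0.45, "some_college": 0.30, "hs_or_less": 0.25}

# Population benchmarks, e.g., from a high response rate government survey.
population_shares = {"college_grad": 0.30, "some_college": 0.30, "hs_or_less": 0.40}

# Weight for each stratum = population share / sample share.
weights = {g: population_shares[g] / sample_shares[g] for g in sample_shares}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# college_grad: 0.67 (weighted down); hs_or_less: 1.60 (weighted up)
```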

Low response rate phone polls still conform closely to high response rate in-person surveys on measures of political and religious identification

Some concepts are so fundamental to understanding public opinion that they are measured in survey after survey. These concepts include political party affiliation, political ideology and religious affiliation. Several high response rate in-person surveys, as well as many low response rate polls, routinely ask these questions so researchers can study how they relate to policy attitudes and other outcomes. The availability of benchmarks from in-person surveys provides an opportunity to evaluate whether or not phone polls are still accurate in this era of single-digit response rates.

If the narrative that polls are broken is correct, one place it would likely manifest is in the trend lines for these fundamental concepts. Specifically, two trend lines (one for the benchmark high response rate survey and one for the telephone poll) that were similar in the 1990s or early 2000s would be expected to diverge by 2016 if single-digit response rate polls were no longer capable of producing unbiased estimates. This scenario is not borne out in the data.

This study compared Pew Research Center and General Social Survey (GSS) trends. Because of differences in question wording and mode of interview, these comparisons are not always precise. In addition, GSS estimates come from individual surveys with a modest sample size. Center estimates represent all of the interviewing conducted for the year, which for common questions involved combining multiple polls and taking the yearly average. This was done to minimize the role of sampling error.

Across the quarter century span for which comparisons are available, the GSS and Center phone polls produced very similar estimates of the share of American adults identifying with the Democratic or Republican parties. Both sets of surveys measure party affiliation with a simple question asking respondents whether they are Democrat, Republican or independent. The average difference in point estimates across the high and lower response rate surveys is 1.4 points for Democratic identification and 1.6 points for Republican identification.

The relationship between the shares identifying with each party is also similar across the high and lower response conditions. Nearly every year of the comparison has more Democrats than Republicans in both surveys (though the difference between the parties is not always statistically significant), and the Democratic advantage has risen and fallen in the two surveys in parallel fashion. Both surveys found the public more evenly divided between the parties in the early 2000s than in the period since 2008. In 2016 – the most recent year for which a comparison is available – the GSS finds Democrats outnumbering Republicans by about 9 percentage points, while Pew Research Center finds Democrats ahead by 7 percentage points. On the whole, this analysis suggests that – despite low response rates – telephone surveys are able to produce accurate readings of the partisan composition of the American public. A different approach to this issue (presented later in the report) compares telephone respondents to nonrespondents using voter file data and reaches the same conclusion.

Beyond partisan affiliation, the portrait of Americans’ ideological orientation is also very similar in the GSS and in Pew Research Center surveys. Pew Research Center’s question offers respondents five categories ranging from “very liberal” to “very conservative,” while the GSS shows respondents a fully labeled seven point scale ranging from “extremely liberal” to “extremely conservative.” In both questions, “moderate” is the middle option.

Despite the differences in how the categories are presented to respondents, there is a reasonably close correspondence between the Center’s surveys and the GSS with respect to the relative shares choosing “moderate” or the conservative or liberal options. Both surveys consistently find more self-identified conservatives than liberals, and both show a gradual uptick over time in the share identifying as liberal. The number describing themselves as moderate is similar in the two sets of surveys (e.g., 34% in Pew Research Center’s 2016 surveys vs. 36% in the GSS that year). Conservatives outnumbered liberals in Pew Research Center surveys in 2016 by a margin of 36% to 25%, while they did so by a margin of 33% to 27% in the GSS.

Like politics, religion is a subject of great interest to many people. But for a variety of reasons, the government collects almost no information about the religious affiliation, attitudes and behavior of the public. In fact, in 1976 Congress prohibited the U.S. Census Bureau from asking about religion in its mandatory surveys such as the decennial census. However, the GSS has measured religion since its inception.

Despite using different questions, Pew Research Center surveys and the GSS produce similar findings, both with respect to the levels of and trends in key religious activities and beliefs. Perhaps most fundamental is what scholars of religion often call “belonging,” or religious affiliation. Pew Research Center surveys track religious affiliation on a nearly monthly basis using a relatively straightforward set of questions that first offers respondents a choice among 12 categories, such as Protestant, Catholic, Jewish and atheist, and then follows up with more detailed questions. The GSS question series on religious affiliation is similar, though it offers fewer options with its initial question (Protestant, Catholic, Jewish, some other religion or no religion).

The portrait of religious affiliation painted by the two surveys is highly similar, both with respect to the shares of the public associating with major religious traditions and with the trends over time. Both find the percentage of adults who are unaffiliated with a religion (either as atheist, agnostic, or no religion) growing rapidly in the past decade and constituting more than one-fifth of the public in 2016 (23% in Pew Research Center surveys, 22% in the GSS). Similarly, both find affiliation with Protestantism in fairly steady decline over the time period examined.

Both surveys show a slight decline in the Catholic share of the population in the past decade or so. Pew Research Center surveys in 2016 found Catholic affiliation at about 21%, while the Catholic share in the GSS is about 23%, a difference that is not statistically significant.

Caveats about benchmarks

Assessing bias in surveys requires an objective standard to which survey findings can be compared. Election polling has such a standard, at least for measures of voting intention: the outcome of the election. But most benchmarks are taken from other surveys. The benchmarks used here are drawn from government-funded surveys that are conducted at considerable expense and with great attention to survey quality. But they are surveys nevertheless and subject to some of the same problems facing the low response rate telephone surveys examined here.

The surveys used as benchmarks in this report have high response rates – on the order of 60% or more. Accordingly, the risk of nonresponse bias is generally thought to be lower for these surveys, though it still exists. Also relevant is the fact that all surveys, no matter the response rate, are subject to measurement error. Questions asked on government-funded surveys are carefully developed and tested, but they are not immune to the factors that create problems of reliability and validity in all surveys. The context in which a question is asked – the questions that come before it – often affects responses to it. Similarly, all survey items may be subject to some degree of response bias, most notably “social desirability bias.” Especially when an interviewer is present, respondents may sometimes modify their responses to present themselves in a more favorable light (e.g., by overstating their frequency of voting). All of these factors can affect the comparability of seemingly identical measures asked on different surveys. Assessing the quality of data is an inexact process at best. It is therefore important to bear in mind that benchmarking provides estimates of bias and is highly dependent on the particular set of measures included.

Telephone surveys somewhat overrepresent politically engaged adults, but the bias has been fairly stable over time

Previous research indicates that surveys tend to get a disproportionate response from those who are active participants in electoral life. One key question is whether that bias has grown larger as response rates have settled into the single digits. To address this, the study leveraged a long trend from the Current Population Survey (CPS), which has an 87% response rate.

Both the CPS and Center phone surveys regularly measure the proportion of U.S. adults who are registered to vote.

The data show a basically stable trend of overestimating voter registration in telephone surveys from 1996 to 2014 (the last year for which the government data are available). Specifically, the observed overestimation is similar in 2014, a year in which the phone surveys had a 9% response rate, and in 1996, when the response rate was 37%. This is evidence that falling response rates are not producing dramatically different survey respondents when it comes to voter registration levels.

The CPS also provides benchmarks for two other measures of political engagement: frequency of voting in local elections and contacting a government official in the past year. Unlike voter registration (a staple of political surveys), Pew Research Center asks these questions only on rare occasion, which means that there is no analogous trend line to consider. To support this study, the Center did, however, administer the questions about local voting and contacting an official in a 2016 national telephone survey.

A comparison of the Center data to the most recent CPS data shows the telephone survey samples to be more engaged than those in the government surveys, though the size of this bias varies from roughly 5 to 15 percentage points. When it comes to regularity of voting, the CPS indicates that 32% of adults vote in all or almost all local elections. The telephone survey estimate is 37%, an overstatement of about 5 percentage points.

A larger bias appears on the question about contacting elected officials. The CPS found that 10% of adults had contacted or visited a public official at any level of government to express their opinion. In contrast, the telephone survey estimate is that 25% have done so.

The most up-to-date comparison on voter registration status is from 2014 (the most recent election year with data available from the CPS). The benchmark indicates that 63% of eligible adults were registered to vote, while the comparable Pew Research Center estimate for three surveys conducted around the election was 70%, an overstatement of about 7 points.

Large biases persist on civic and social engagement measures

The results are substantially less positive with respect to measures of civic and social engagement. As was documented in the Center’s 2012 report on survey nonresponse, the new study suggests that telephone surveys continue to over-represent people who say they have volunteered, worked to solve a neighborhood problem, belonged to a community, recreational or civic association and people who say they trust or regularly talk with neighbors.

The biases range from a whopping high of 38 percentage points on working with neighbors (8% in the Current Population Survey vs. 46% in the telephone survey) to a low of 9 percentage points on participating in a civic or service association (6% in the CPS vs. 15% in the telephone survey).

The source of these large biases is reasonably well understood, based on research published by Abraham, Helms and Presser in 2009 (and more recently by Amaya and Presser) that examined reported rates of volunteering and other civic behaviors in the CPS (the same survey used as the benchmark in the present analysis). By comparing the survey respondents who subsequently completed an additional special survey with those who did not, the authors showed that the overall rate of volunteering was sensitive to survey nonresponse. In their conclusion, they observe that participating in surveys is a prosocial behavior related to other kinds of behaviors, such as volunteering. They write that “our findings suggest that there is an important element of altruism in the decision about whether to respond to a survey request.”

While the implications for accurate measurement of prosocial behaviors in low response rate surveys are troubling, our 2012 analysis of this issue found that such behaviors – volunteering in particular – were not highly correlated with most other survey topics of interest. A similar analysis conducted with 2016 data found that the overrepresentation of civically engaged adults in phone surveys, if anything, increases support for both the Republican Party and Donald Trump. Among non-Hispanic whites, the share identifying as Republican or leaning to the Republican Party was 55% among those who had volunteered in the past year, as compared with 45% among non-volunteers. In theory, weighting down volunteers to bring the phone survey data in line with the CPS target would reduce the civic engagement bias, but would not necessarily make other survey estimates more accurate. In 2016, such an adjustment would likely have exacerbated the extent to which some telephone surveys overstated support for Hillary Clinton. That said, the finding that overrepresentation of volunteers favors Republicans is, at present, based on just one survey. Additional testing in other surveys would help determine how robust this pattern is.

Telephone estimates generally show little bias from nonresponse on lifestyle, health and demographic questions

Across 14 measures of demographic and personal characteristics (temporarily holding aside variables used for weighting Pew Research Center surveys, which are discussed in a later section), the average difference between the government estimate and the Center’s survey estimate was 3 percentage points, and varied between 0 and 8 points. The largest difference was seen on a measure asking respondents about their health status. The government estimate of the share of people who rate their health as excellent or very good is 59%, while the telephone survey found 51% doing so.

The other 13 personal and demographic items were quite close to the benchmarks. The Center’s telephone survey overstated the share who received food stamps in the previous year by 4 percentage points and the share who received unemployment compensation also by 4 percentage points.

The remaining demographic and personal measures miss the benchmarks by 3 percentage points or less. These include measures of family income, employment status, household size, citizenship, health insurance, length of residence at current address, marital and parenthood status, smoking, place of birth among Hispanics and having a driver’s license. In other words, on all these measures, the relatively lower response telephone survey provided a measure of the phenomenon nearly identical to that of the high response rate government survey used as a benchmark.

The accuracy of telephone survey data on lifestyle, health and demographics is at least as high as four years ago, especially for estimates based on young adults

Researchers are, of course, interested in the extent to which nonresponse bias is getting worse over time. Comparing current levels to those measured roughly four years ago, we find that, on average, the accuracy of telephone survey data is at least as high as it was in 2012. On 13 demographic, lifestyle and health questions that have high response rate benchmark survey data, the average (absolute) difference between the Center telephone estimates and the benchmark survey estimates was 2.7 percentage points in 2016, compared with 2.8 points in 2012.

In general, accuracy as measured by these benchmarks was lower among certain demographic groups, such as young adults and minorities. But there is no indication that the biases within groups got worse over time. Most major subgroups – defined by age, gender, race or education – saw the accuracy of their estimates from phone surveys either stay level or slightly improve. For example, across the 13 lifestyle, health and demographic questions with high response rate comparisons, the average difference between the Hispanic adult estimate from Pew Research Center phone surveys and the same estimate from the benchmark survey was 5.5 percentage points in 2016, compared with 6.7 in 2012. The average change in accuracy from 2012 to 2016 was similar for non-Hispanic whites. For blacks, however, phone survey estimates differed from benchmark survey estimates by an average of 5.1 percentage points in 2016, versus 4.2 points in 2012.

Of the groups analyzed, young adults (ages 18 to 29) saw the accuracy of their phone survey estimates improve the most. In Pew Research Center surveys conducted in 2012, the absolute average difference from the benchmarks reviewed was 6.5 percentage points. In 2016 surveys, the difference from benchmarks on the same set of questions was 4.7 percentage points. This change suggests a small improvement in the quality of the data collected among adults ages 18 to 29 in low response rate phone surveys in 2016 compared with 2012.

Educational attainment is another key demographic grouping we are often interested in. For adults with a high school education or less, their phone survey estimates differed from benchmark survey estimates by an average of 5.3 percentage points in 2016 versus 4.4 points in 2012. The change from 2012 to 2016 in the average difference from benchmarks was less dramatic for adults with higher levels of formal education.

Trends in the composition of survey samples over time

While random-digit-dial survey samples – designed to randomly select respondents and thus create a representative cross-section of a population — start out as generally well balanced, certain patterns of demographic bias quickly creep in once the calling process begins. These are well known by now, which is why Pew Research Center and many other pollsters adjust for demographic imbalances in weighting. The question is whether they are getting worse over time. For the most part, this analysis suggests the answer is no.

Because they are more willing to talk to pollsters, better educated individuals are overrepresented in most telephone surveys, while some racial minorities – especially those for whom English is not their native language – are often underrepresented. The youngest adults are harder for surveys to contact and interview, and city-dwellers (for a variety of reasons having to do with lifestyle and demography) are also more elusive.

To combat these biases, surveys have long relied on statistical weighting as a corrective for known issues on key demographic variables such as race and ethnicity, age, gender and educational status. Samples are compared with government benchmarks on these core demographic variables, and the data are adjusted so that the samples conform to the population. Surveys with lower response rates may be more subject to these types of biases. But the biases are also a function of aspects of the survey design, such as the inclusion of cellphones in the sample (which helps in reaching a younger, more ethnically diverse segment of the population) or how respondents are selected within the households that are reached.

To the extent that declining response rates create more nonresponse bias in surveys, the weighting applied to fix the biases must become more aggressive. This comes at a cost, since weighting also results in some loss of precision in the sample and in smaller effective sample sizes. To assess how the biases in core demographic characteristics are changing over time, the unweighted demographic composition of Pew Research Center samples was compared with government benchmarks on four key variables across a quarter century.
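The precision cost mentioned above is commonly quantified with Kish’s approximation for effective sample size, n_eff = (sum of the weights)^2 / (sum of the squared weights). A minimal Python sketch with hypothetical weights:

```python
# Minimal sketch of the precision lost to weighting, via Kish's
# effective sample size. The weights below are hypothetical.
weights = [0.6] * 500 + [1.4] * 500  # 1,000 respondents, down- and up-weighted

n = len(weights)
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
print(f"nominal n = {n}, effective n = {n_eff:.0f}")  # ~862

# The more aggressive the weights, the larger the sum of squared weights
# and the smaller the effective sample size.
```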

The adjacent graphics illustrate these trends. Each graph plots three lines for each category of one of the variables of interest. For example, the left panel of the first graph shows the share of the sample that is age 18 to 29. The light blue line is the unweighted share of Pew Research Center samples year by year from 1992 to 2016. The gray line is the benchmark for this age group, computed from U.S. Census surveys. The dark blue line is the Pew Research Center trend after weighting has been applied; it should closely match the benchmark line. As the graph illustrates, surveys during the middle years of the last decade underrepresented young adults in this age group, a phenomenon driven by the rapid adoption of cellphones. After Pew Research Center began adding cellphones to its telephone samples in 2007, the shortfall in young adults began to lessen, and accuracy on this variable has continued to improve.

As the graphics demonstrate, the magnitude of the observed biases for most variables has varied over this time period. But in general, the biases are not significantly greater than in the past. And samples for age, race and ethnicity have improved in quality in recent years as telephone samples have included more cellphones and incorporated other design changes, such as the routine inclusion of Spanish language interviewing.

Properly representing the population with respect to educational attainment remains perhaps the greatest challenge for low response rate surveys. Pew Research Center surveys, and those like them, have consistently overrepresented college graduates and underrepresented those with a high school education or less over the period examined here. The magnitude of this imbalance has crept upward in the past few years, going from an average of 8 to 10 points during the 1990s and 2000s to 12 to 15 points since 2012. At the same time, the shortfall in non-college individuals has been relatively consistent, averaging about 10 points since the 1990s. As discussed above, weighting helps to correct these imbalances. Still, it would behoove survey researchers to find ways to close this education gap at the data collection stage rather than relying on weighting to fix it.

Phone survey respondents and nonrespondents have similar political profiles based on voter file data; respondents vote more often

In the world of surveys, nonresponse bias becomes a problem when the roughly 90% of people who don’t participate in any given poll are meaningfully different from the 10% who do. One way to find out if this is happening is to learn more about the people who don’t answer. That is a difficult task, since for any given survey the so-called “nonrespondents” have not provided any information to pollsters. However, a developing data resource – large, commercially available national databases of adults, their voter registration status and their voting histories – provides a window into many of these survey nonrespondents.

To leverage this, the cellphone numbers sampled for the survey – numbers for both people responding to the survey and those not responding – were matched to a large national database, called a voter file, that contains information about voter registration, turnout, party registration and estimated partisanship for most U.S. adults. Out of all 31,412 cellphone numbers, 7,698 were a unique match against the voter file. Of these, 630 were respondents and 7,068 were nonrespondents, comprising 22% and 19% of all respondents and nonrespondents respectively.

The upshot: Among these uniquely matched cases, a comparison of survey respondents and nonrespondents finds the respondents more politically engaged than nonrespondents but nearly identical in terms of partisan loyalties, a result very similar to that seen in a comparable analysis in 2012.

Respondents were slightly more likely than nonrespondents to be registered to vote (85% among respondents vs. 81% among nonrespondents) and to have voted in the 2012 elections (62% vs. 52%). The gap in 2014 turnout – voting in the off-year elections that do not include a presidential contest – is considerably larger (49% among survey respondents vs. 33% among those who didn’t respond). The voter file also includes a measure of the likelihood of voting in 2016, where a score of 0 is very unlikely to vote and 100 is very likely. The mean score for survey respondents was 77, while for nonrespondents it was 69.

By comparison, there is no evidence of partisan bias in the sample. Those who participated in the poll look quite similar to those who did not with respect to partisan affiliation. The voter file includes an imputed partisanship score that varies from 0 (most Republican) to 100 (most Democratic). The mean partisan score for both respondents and nonrespondents is 58. The voter file record of party registration likewise shows no bias: registered Democrats were 20% of respondents and 20% of nonrespondents; the comparable figures for registered Republicans were 14% and 13%.

Although restricted to only those cellphone numbers that were uniquely matched to the voter file, this analysis – using a very different approach – points to the same conclusion as the GSS trend analysis at the top of the report: despite low response rates, well-designed and carefully weighted telephone surveys still produce accurate information about the political profile of the American public.

About Science

A Look at What the Public Knows and Does Not Know About Science

Before you read the report, test your science knowledge by taking the interactive quiz. The short quiz uses questions recently asked in a national poll. After completing it, you can compare your score with those of the general public and of people like yourself.

Take the Quiz

A Snapshot of What Americans Know About Science

A new Pew Research Center survey finds that most Americans can answer basic questions about several scientific terms and concepts, such as the layers of the Earth and the elements needed to make nuclear energy. But other science-related terms and applications, such as what property of a sound wave determines loudness and the effect of higher altitudes on cooking time, are not as well understood.

Most Americans (86%) correctly identify the Earth’s inner layer, the core, as its hottest part, and nearly as many (82%) know uranium is needed to make nuclear energy and nuclear weapons.

But far fewer are able to identify the property of a sound wave that determines loudness. Just 35% correctly answer amplitude, or height. Some 33% incorrectly say it is frequency and 23% say it is wavelength. And just 34% correctly state that water boils at a lower temperature in a high-altitude setting (Denver) than near sea level (Los Angeles).

Fully 73% of Americans distinguish between astronomy and what is commonly considered a pseudoscience: astrology. Twenty-two percent of Americans incorrectly say that astronomy – not astrology – is the study of how the positions of stars and planets can influence human behavior. Another 5% give some other incorrect response.

How much Americans appear to know about science depends on the kinds of questions asked, of course. Science encompasses a vast array of fields and information, and the questions in the new Pew Research survey represent a small slice of science knowledge. On Pew Research Center’s set of 12 multiple-choice questions – some of which include images as part of the questions or answer options – Americans gave more correct than incorrect answers; the median was eight correct answers out of 12 (mean 7.9). Some 27% answered eight or nine questions correctly, while another 26% answered 10 or 11 items correctly. Just 6% of respondents got a perfect score.

These findings come from Pew Research Center’s American Trends Panel, a nationally representative panel of randomly selected U.S. adults. The survey of 3,278 adults (including 2,923 adults online and 355 respondents by mail) was conducted Aug. 11-Sept. 3, 2014.

Why science knowledge matters

A variety of scholars have argued that public understanding of science issues and concepts is a hallmark of an informed public.1 As developments in science and technology raise new issues for public debate – from driverless cars and space exploration to climate change and genetically modified crops – a public with more knowledge of scientific facts and principles is often seen as one better able to understand these developments and make informed judgments.2

One major avenue for science learning is through the schools. But neither the public nor those connected to science have strongly positive views about America’s science and technology education. A 2015 Pew Research report found that the general population and members of the American Association for the Advancement of Science (AAAS) both see U.S. K-12 education in science, technology, engineering and mathematics (STEM) fields as “average” or “below average” compared with other industrialized countries.

Just 29% of Americans and 16% of AAAS members consider the country’s K-12 STEM education to be among the best in the world.

Moreover, 84% of AAAS members consider Americans’ limited knowledge about science to be a major problem for the scientific enterprise. Further, most of the AAAS members say that too little STEM education is a major reason that the public has limited science knowledge.

Those with higher education levels are more likely to know answers to questions about science. There are also times when gender, age, race and ethnicity matter.

There are substantial differences among Americans when it comes to knowledge and understanding of science topics. In the new survey, education proves to be a major factor distinguishing higher performers on our science questions from those who get fewer correct. Adults with postgraduate and college degrees performed better than those with a high school diploma or less. This pattern is consistent with a 2013 Pew Research report on this topic.

Science Knowledge Varies by Education and Demographic Factors

Pew Research’s findings are also consistent with analysis of the factual knowledge index in the National Science Board’s Science and Engineering Indicators. That research finds that higher educational levels are associated with more factual knowledge of science, and that college graduates who took at least three college-level courses in science or mathematics have higher levels of science knowledge than do those who took fewer science and math courses.3

The new Pew Research survey also finds gaps in science knowledge between men and women, with men outperforming women on many questions – even when comparing men and women with similar levels of education.

Questions on this survey deal primarily with topics tied to the physical sciences, rather than life sciences such as those related to health and medicine. Research by the federal government has found that gender differences in science knowledge tend to be larger on questions about the physical sciences than the life sciences.4

In previous Pew Research surveys that are also detailed in this report, there were no differences or only modest knowledge differences between men and women on four health and biomedical topics in the news. For instance, on one previous question, women were more likely than men to answer correctly that antibiotics do not kill both viruses and bacteria. At the same time, men were more likely than women to know that the main function of red blood cells is to carry oxygen throughout the body. The Science and Engineering Indicators report found no difference between adult men and women on factual knowledge of biomedical topics.5

Generally, younger adults display slightly higher overall knowledge of science than adults ages 65 and older on the 12 questions in the new Pew Research survey. On some questions, younger adults are particularly well-informed. For example, 80% of adults ages 18 to 29 correctly identify radio waves as the waves that are used to transmit cellphone calls, as do 77% of those ages 30 to 49; fewer adults (57%) ages 65 and older know this. On at least one question, however, adults ages 65 and older are more informed than younger adults: 86% of adults 65 and older correctly identify the developer of the polio vaccine as Jonas Salk, compared with 68% of those ages 18 to 29.

There are also differences associated with race and ethnicity in the new survey’s 12 questions. Whites are more likely than Hispanics or blacks to answer more of these questions correctly, on average; the mean number of items correct is 8.4 for whites, 7.1 for Hispanics and 5.9 for blacks. The pattern across these groups and the size of the differences vary, however.

The findings on race and ethnicity are broadly consistent with results on science knowledge questions in the General Social Surveys between 2006 and 2014. Pew Research analysis of the GSS data finds white adults scored an average of 6.1 out of 9 questions correctly, compared with 4.8 for Hispanics and 4.3 for blacks. While whites, blacks and Hispanics with higher education levels know more factual science items on average, mean differences by race and ethnicity occur among all education levels.6 As with gender differences, differences by race and ethnicity could tie to a number of factors, including differences in areas of study at the high school, college and postgraduate levels and other factors.7 To the extent that science knowledge, especially on issues in the news and emerging scientific developments, is learned in connection with adult life activities, the long-standing underrepresentation of blacks and Hispanics in the science, technology, engineering and mathematics workforce could also be a contributing factor.8

The questions in the new Pew Research Center survey represent only a small slice of science knowledge. Science encompasses a vast array of fields and information. Across the set of 12 science knowledge questions in this survey, it is clear that some information is widely known while other information is much less so. To allow for comparisons across a wider array of questions and topics, we include a series of tables in this report with findings from the new Pew Research survey and from previous Pew Research studies. Comparisons across surveys should be made cautiously. The new survey includes several questions with images or photographs displayed online or in a print questionnaire. Past surveys were, with one exception, conducted by telephone and thus relied solely on respondents’ aural and verbal skills. Little is known about how different modes of interview could influence the findings. Nonetheless, these comparisons help illustrate that the broad patterns of differences in science knowledge by education and demographic subgroups in this new survey are generally in keeping with previous Pew Research surveys that tapped public knowledge about science.

State of the News

State of the News Media

Eight years after the Great Recession sent the U.S. newspaper industry into a tailspin, the pressures facing America’s newsrooms have intensified into nothing less than a reorganization of the industry itself, one that affects even those news consumers unaware of the tectonic shifts taking place.

Fact Sheets:

Newspapers, Cable News, Local TV News, Network News, Digital News Audience, Digital News Revenue, Podcasting, Audio, Public Broadcasting, Alternative Weeklies, News Magazines, Hispanic Media

In 2015, the newspaper sector had perhaps the worst year since the recession and its immediate aftermath. Average weekday newspaper circulation, print and digital combined, fell another 7% in 2015, the greatest decline since 2010. While digital circulation crept up slightly (2% for weekday), it accounts for only 22% of total circulation. And any digital subscription gains or traffic increases have still not translated into game-changing revenue solutions. In 2015, total advertising revenue among publicly traded companies declined nearly 8%, including losses not just in print, but digital as well.

Key annual audience trends, 2015 vs. 2014

The industry supports nearly 33,000 full-time newsroom employees. Indeed, newspapers employ 32% of the daily reporters stationed in Washington, D.C., to cover issues and events tied to Congress, as well as 38% of the reporters who cover state legislatures. Still, smaller budgets have continued to lead to smaller newsrooms: The latest newspaper newsroom employment figures (from 2014) show 10% declines, greater than in any year since 2009, leaving a workforce 20,000 positions smaller than 20 years prior. And the cuts keep coming: Already in 2016, at least 400 cuts, buyouts or layoffs have been announced. Ownership trends show further signs of consolidation, as three newspaper companies – E.W. Scripps, Journal Communications and Gannett – are now one. And the recently renamed Tribune Publishing Co. spent much of the spring of 2016 fending off an attempt by Gannett to purchase it as well.

Print newspapers, to be sure, have a core audience and subscriber base that the industry hopes will buy enough time to help ease the digital transition. But recent data suggests the hourglass may be nearing empty: A January 2016 Pew Research Center survey found that just 5% of U.S. adults who had learned about the presidential election in the past week named print newspapers as their “most helpful” source – trailing nearly every other category by wide margins, including cable, local and national TV, radio, social media and news websites. (About one-third got at least some election news from a print paper, which again trailed nearly every other category.)

Key annual economic trends, 2015 vs. 2014

The three television-based news sectors face serious challenges but have benefitted from the fact that despite all the growth in digital, including a surge in digital video developments over the last year, large swaths of the public – and thus advertisers – remain drawn to that square box in the middle of the room. Cable and network TV both saw revenue growth in 2015. Network TV grew ad revenues by 6% in the evening and 14% in the morning. Cable increased both ad revenue and subscriber revenue for a total growth of 10% and saw profit gains as well. Local ad revenue, which follows a cyclical pattern tied to election-year ad spending, was down compared with the election year of 2014 but on par with the last non-election year of 2013 and higher than the last presidential primary year (2011). Additionally, retransmission revenue is expected to reach $6.3 billion in 2015, five times that of 2010.

Despite current financial strength, though, TV-based news can’t ignore the public’s pull toward digital. The contentious presidential primary helped spur cable prime time viewership 8% above 2014 levels, but those audience gains followed a year of declines across the board in 2014. And, while network TV newscasts had a mixed year – morning news audience declined while evening remained about steady – local TV news lost audience in every major timeslot. More broadly, a 2015 Pew Research Center survey suggests that as many as one-in-seven Americans have turned away from cable or satellite TV subscriptions. This “cord cutting” has implications not just for cable but for any network or station that benefits from the pay TV system. This coincides with a growing digital video ad market, which has attracted the interest of publishers. The Center’s survey data reveal that dramatic generational differences already exist, with those under 30 much less likely than those 30+ to watch any of the three programming streams. Instead, younger adults are more likely to name social media as a main source of news. Even beyond the young, fully 62% of U.S. adults overall now get news on social media sites – many of which took steps over the last year to enhance their streaming video capabilities.

With audience challenges already in view and few immediate financial incentives to innovate, the dilemma facing the TV news business bears an eerie resemblance to the one faced by the newspaper industry a decade ago, except for the fact that the digital realm is much more developed and defined today.

It has been evident for several years that the financial realities of the web are not friendly to news entities, whether legacy or digital only. There is money being made on the web, just not by news organizations. Total digital ad spending grew another 20% in 2015 to about $60 billion, a higher growth rate than in 2013 and 2014. But journalism organizations have not been the primary beneficiaries. In fact, compared with a year ago, even more of the digital ad revenue pie – 65% – is swallowed up by just five tech companies. None of these are journalism organizations, though several – including Facebook, Google, Yahoo and Twitter – integrate news into their offerings. And while much of this concentration began when ad spending was mainly occurring on desktop platforms, it quickly took root in the rapidly growing mobile realm as well.

Increasingly, the data suggest that the impact these technology companies are having on the business of journalism goes far beyond the financial side, to the very core elements of the news industry itself. In the predigital era, journalism organizations largely controlled the news products and services from beginning to end, including original reporting; writing and production; packaging and delivery; audience experience; and editorial selection. Over time, technology companies like Facebook and Apple have become integral, if not dominant, players in most of these arenas, supplanting the choices and aims of news outlets with their own choices and goals.

The ties that now bind these tech companies to publishers began in many ways as lifelines for news organizations struggling to find their way in a new world. First, tech companies created new pathways for distribution, in the form of search engines and email. The next industry overlap involved the financial model, with the creation of ad networks and app stores, followed by developments that impact audience engagement (Instant Articles, Apple News and Google’s AMP). Now, the recent accusations regarding Facebook editors’ possible involvement in “trending topics” selections have shined a spotlight on technology companies’ integral role in the editorial process. The accusations, whether true or not, highlighted the human element involved in any machine learning tool, not only Facebook’s. The messaging app Snapchat reports having about 75 editorial-level staff members and announced in mid-May that it will begin using an algorithm for news story selections.

Original reporting and writing are the two industry roles largely left to news organizations (though there are a handful that are using machines to produce news). None of the others carry much worth without these two key elements – so these roles are in some ways critical to tech companies. But it is also true – and some nonprofits have found this in their struggle to get audiences – that well-reported news stories are not worth much without the power of strong distribution and curation channels. What is less clear is how the tug and pull between tech and journalism companies will evolve to support each other as necessary parts of the whole, and what this rebuilt industry will ultimately mean for the public’s ability to stay informed.

These are some of the findings of Pew Research Center’s 2016 State of the News Media report, now in its 13th year. This is the Center’s annual analysis of the state of the organizations that produce the news and make news available to the public day in and day out. Understanding the industry in turn allows researchers to ask and answer important questions about the relationship between information and democracy. Within this report we provide data on 13 separate segments of the news industry, each with its own data-filled fact sheet. Each individual fact sheet contains embeddable graphics that also link to a full database of roughly 80 charts and tables that pull from roughly 20 different sources. This overview highlights and weaves together audience, economic, newsroom investment and ownership trends across the industry.

News sectors other than those discussed above had mixed years. In ethnic media, Hispanic print weeklies saw some circulation growth, but the major Hispanic dailies all declined and the largest TV network’s news programs lost both audience and revenue. The number of black newspapers remained at roughly 200, though there is evidence of further audience decline. In the digital space, The Root – a leading black-oriented news site – was acquired by Univision Communications in a bid to expand its audience. NPR erased its years-long operating deficit and expanded its digital offerings, including three new podcasts in 2015. The 14 news magazines studied here varied dramatically in their print and digital audience figures, though digital figures are harder than ever to gauge with the greater use of platforms such as Texture, which provide consumers with bundled access to multiple magazines. There is no audited, sector-wide audience or financial data for digital-native news outlets such as the Huffington Post and Vox, but what the Center is able to collect suggests growth in total audience and time spent on these websites. Beyond their home pages, these sites are also pouring efforts into social media, mobile apps and even a resurgence of email newsletters. Podcast programming and listenership grew again in 2015, though podcasts overall (beyond just news) still reach a minority of Americans (36%) and bring in a fraction of the revenue of other news genres.

There were also, in the past year, some exciting developments and experiments in digital storytelling by those producing original reporting. Several news outlets, including The New York Times and The Des Moines Register, are experimenting with virtual reality journalism that can let consumers “experience” the news themselves; others, like the Washington Post and Quartz, have built “chatbots,” which (like Apple’s Siri or Microsoft’s Cortana) provide personalized, interactive headlines through texts or mobile messaging services like Facebook Messenger; ProPublica has delved into the big data space, including a deep examination of how criminal profile algorithms are biased; and Univision Digital launched Univision Beta, in collaboration with MIT, experimenting with new ways to tell stories, especially on social and messaging platforms, such as its new hub for online election reporting, Destino 2016.

But even for these outlets, the lines of dependency on technology companies run deep. As those lines continue to solidify, it will be important to keep in mind that the result is about far more than who captures the upper hand or the revenue base. It will determine how, and with what kinds of storytelling, Americans learn about the issues and events facing society and the world.

Business Loans

Small business loans often come with a slew of additional fees. While they may vary based on the lender, the following are common ones you might encounter.

Financing is a crucial part of any successful business, providing much-needed capital for important investments like renovations, upgrades, expansions and inventory. But it’s not without its downsides. Many business loans come with obscure or hidden fees, which you may not know about until it’s too late to turn back.

Let’s take a look at some of the lesser-known fees on small business loans.

Origination fee

Origination fees are charged by lenders and brokers for processing your paperwork and getting the approval process on track. Generally, lenders will calculate this fee as a percentage of your total loan amount.

Unfortunately, such fees aren’t simply an extra cost to pay: they also inflate your effective APR, which can be particularly costly if you’ve taken out a short-term loan.
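
A back-of-the-envelope sketch (made-up loan terms, simple interest, a single payment at maturity; no particular lender's formula) shows how deducting the fee up front raises the effective rate above the quoted one:

```python
# Illustrative only: made-up terms, simple interest, single payment at maturity.
def effective_apr(principal, nominal_rate, term_years, origination_pct):
    proceeds = principal * (1 - origination_pct)             # cash actually received
    repayment = principal * (1 + nominal_rate * term_years)  # amount repaid at maturity
    return (repayment - proceeds) / proceeds / term_years    # annualized cost of funds

# A $50,000 six-month loan quoted at 10% with a 3% origination fee:
print(f"{effective_apr(50_000, 0.10, 0.5, 0.03):.1%}")  # ~16.5%, not 10%
```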

Underwriting fee

Simply put, underwriting is the process of assessing a client’s eligibility, creditworthiness, and overall risk level. A thorough, fair underwriting process will take a number of factors into consideration, including cash flow and revenue streams of a business, the value of collateral, credit scores, the borrower’s equity, and additional credit enhancements (like co-signing guarantees).

Underwriting fees account for all of this, and as mentioned before, vary based on lender. They’re typically charged as a percentage of the total loan amount.

SBA guarantee fees

The Small Business Administration (SBA) is an interesting case. As a government agency, it doesn’t directly loan out capital to entrepreneurs. Instead, it backs private capital with its own guarantee, which makes it less risky for private lenders to give entrepreneurs favorable loan terms. In other words, lenders won’t take as large a loss if borrowers default on repayment.

If you are taking out a loan from an SBA-approved lender, note that the SBA does charge a fee for this service, which is capped by law. Depending on your lender, this fee may (or may not) be passed on to you, the borrower. SBA guidelines state that loans under $150,000 accrue no fees, whereas loans of $150,000 to $700,000 (with maturity of over one year) accrue a fee of 3%, with an additional 0.25% fee to be paid on loans exceeding $1 million.
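
A rough sketch of that tiered structure as described here follows. It is a simplification: the text does not spell out a bracket between $700,000 and $1 million (the 3% rate is carried through as an assumption), the real SBA schedule applies to the guaranteed portion of the loan rather than the full amount, and the rates change over time.

```python
# Simplified illustration of the guarantee-fee brackets as described above.
# Assumptions: 3% is carried through above $700,000, and the fee is computed
# on the full loan amount rather than the guaranteed portion.
def sba_guarantee_fee(loan_amount):
    if loan_amount < 150_000:
        return 0.0                                 # no fee under $150,000
    fee = loan_amount * 0.03                       # 3% bracket
    if loan_amount > 1_000_000:
        fee += (loan_amount - 1_000_000) * 0.0025  # extra 0.25% above $1 million
    return fee

print(sba_guarantee_fee(100_000))    # 0.0
print(sba_guarantee_fee(500_000))    # 15000.0
print(sba_guarantee_fee(1_200_000))  # 36500.0
```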

Bounce fees

As the name suggests, these unsuccessful-payment fees are charged when your bank account has insufficient funds to cover a loan payment (such as a regularly scheduled installment). Generally, these fees are flat amounts rather than percentages.

Prepayment fee

The prepayment penalty clause usually stipulates that if the borrower pays back the remaining balance early, they’ll have to pay back a portion of the remaining interest, all the remaining interest, or a flat fee. So, you may think you’re saving money by paying back a loan early, but that’s not always true (when a prepayment penalty is present).

Why do these exist? Well, lenders make their money through interest. So if you pay your balance back six months early on a one-year loan term, your lender will lose out on six months’ worth of interest. This is a fee to look for before you sign a loan agreement, even if you’re unsure whether you’ll pay back the loan early.
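
A quick sketch of that math, with hypothetical terms and a hypothetical clause that claws back half of the remaining interest:

```python
# Hypothetical terms and simple interest: a sketch of the math, not any
# lender's actual penalty formula.
def prepayment_penalty(balance, annual_rate, months_remaining, share_of_interest):
    remaining_interest = balance * annual_rate * months_remaining / 12
    return remaining_interest * share_of_interest  # portion the clause claws back

# Paying off a $50,000 balance at 10% APR six months early, with a clause
# that recovers half of the remaining interest:
print(prepayment_penalty(50_000, 0.10, 6, 0.5))  # 1250.0
```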

Remember: these five types of fees are very common, but they aren’t the only ones you’ll encounter. In fact, they’re just a small selection of the many fees that can come with commercial loans. Fees will vary based on loan type and lender, so it’s in your best interest to review your contract carefully.

Not everyone in advanced economies

Not everyone in advanced economies is using social media

Despite the seeming ubiquity of social media platforms like Facebook and Twitter, many in Europe, the U.S., Canada, Australia and Japan do not report regularly visiting social media sites. But majorities in all of the 14 countries surveyed say they at least use the internet.

Social media use is relatively common among people in Sweden, the Netherlands, Australia and the U.S. Around seven-in-ten report using social networking sites like Facebook and Twitter, but that still leaves a significant minority of the population in those countries (around 30%) who are non-users.

At the other end of the spectrum, in France, only 48% say they use social networking sites. That figure is even lower in Greece (46%), Japan (43%) and Germany (37%). In Germany, this means that more than half of internet users say they do not use social media. 

The differences in reported social media use across the 14 countries are due in part to whether people use the internet, since low rates of internet access limit the potential social media audience. While fewer than one-in-ten Dutch (5%), Swedes (7%) and Australians (7%) don’t access the internet or own a smartphone, that figure is 40% in Greece, 33% in Hungary and 29% in Italy.

However, internet access doesn’t guarantee social media use. In Germany, for example, 85% of adults are online, but less than half of this group report using Facebook, Twitter or Xing. A similar pattern is seen in some of the other developed economies polled, including Japan and France, where social media use is low relative to overall internet penetration.

Across the 14 countries surveyed, a median of 57% say they visit social media sites, such as Facebook, Twitter and other country-specific sites (for full list, see appendix). But social media use is not equally widespread within each country. Generally, those who are younger, more educated and richer tend to be more likely to report using social media, even among internet users.

The age gap on social media use between 18- to 34-year-olds and those ages 50 and older is significant in every country surveyed. For example, 88% of Polish millennials report using social networking sites, compared with only 17% of Poles ages 50 and older, a 71-percentage-point gap.

Note: See here for topline results of our survey, a list of smartphone and social networking examples used in each country, and methodology.

Online Shopping

Online Shopping and E-Commerce

New technologies are impacting a wide range of Americans’ commercial behaviors, from the way they evaluate products and services to the way they pay for the things they buy

Americans are incorporating a wide range of digital tools and platforms into their purchasing decisions and buying habits, according to a Pew Research Center survey of U.S. adults. The survey finds that roughly eight-in-ten Americans are now online shoppers: 79% have made an online purchase of any type, while 51% have bought something using a cellphone and 15% have made purchases by following a link from social media sites. When the Center first asked about online shopping in a June 2000 survey, just 22% of Americans had made a purchase online. In other words, today nearly as many Americans have made purchases directly through social media platforms as had engaged in any type of online purchasing behavior 16 years ago.

But even as a sizeable majority of Americans have joined the world of e-commerce, many still appreciate the benefits of brick-and-mortar stores. Overall, 64% of Americans indicate that, all things being equal, they prefer buying from physical stores to buying online. Of course, all things are often not equal – and a substantial share of the public says that price is often a far more important consideration than whether their purchases happen online or in physical stores. Fully 65% of Americans indicate that when they need to make purchases they typically compare the price they can get in stores with the price they can get online and choose whichever option is cheapest. Roughly one-in-five (21%) say they would buy from stores without checking prices online, while 14% would typically buy online without checking prices at physical locations first.

Although cost is often key, today’s consumers come to their purchasing decisions with a broad range of expectations on a number of different fronts. When buying something for the first time, more than eight-in-ten Americans say it is important to be able to compare prices from different sellers (86%), to be able to ask questions about what they are buying (84%), or to buy from sellers they are familiar with (84%). In addition, more than seven-in-ten think it is important to be able to try the product out in person (78%), to get advice from people they know (77%), or to be able to read reviews posted online by others who have purchased the item (74%). And nearly half of Americans (45%) have used cellphones while inside a physical store to look up online reviews of products they were interested in, or to try and find better prices online.

The survey also illustrates the extent to which Americans are turning toward the collective wisdom of online reviews and ratings when making purchasing decisions. Roughly eight-in-ten Americans (82%) say they consult online ratings and reviews when buying something for the first time. In fact, 40% of Americans (and roughly half of those under the age of 50) indicate that they nearly always turn to online reviews when buying something new. Moreover, nearly half of Americans feel that customer reviews help “a lot” to make consumers feel confident about their purchases (46%) and to make companies be accountable to their customers (45%).

But even as the public relies heavily on online reviews when making purchases, many Americans express concerns over whether or not these reviews can be trusted. Roughly half of those who read online reviews (51%) say that they generally paint an accurate picture of the products or businesses in question, but a similar share (48%) say it’s often hard to tell if online reviews are truthful and unbiased.

Finally, this survey documents a pronounced shift in how Americans engage with one of the oldest elements of the modern economy: physical currency. Today nearly one-quarter (24%) of Americans indicate that none of the purchases they make in a typical week involve cash. And an even larger share – 39% – indicates that they don’t really worry about having cash on hand, since there are so many other ways of paying for things these days. Nonwhites, low-income Americans and those 50 and older are especially likely to rely on cash as a payment method.

Among the other findings of this national survey of 4,787 U.S. adults conducted from Nov. 24 to Dec. 21, 2015:

  • 12% of Americans have paid for in-store purchases by swiping or scanning their cellphones at the register.
  • Awareness of the alternative currency bitcoin is quite high, as 48% of Americans have heard of bitcoins. However, just 1% of the public has actually used, collected or traded bitcoins.
  • 39% of Americans have shared their experiences or feelings about a commercial transaction on social media platforms.

Job Seeking is Going Mobile

Highlights from the report: Searching for Work in the Digital Era. Like many other aspects of life, job seeking is going mobile: 28% of Americans have used a smartphone as part of a job search, and half of these “smartphone job seekers” have used their smartphone to fill out a job application.

Today’s Businesswomen

SBA provides resources to help women entrepreneurs launch new businesses, grow their businesses and compete in the global marketplace. With our online resources, financing opportunities and Women’s Business Centers, we’re here to help you succeed.

 

Job Opportunities

Search for job opportunities across the United States on Simply Hired. Browse by job category, city, state, employer and more. Get a head start and post your resume.

10 Facts About American Workers

Read More

NEW IN 2017 AMSNEWS.TV MAGAZINE

AMSNEWS.TV MAGAZINE is the fastest-growing digital magazine. Avid readers come here every day to read free publications created by enthusiastic business professionals from all over the globe, devoted to topics such as art, business news, fashion, film, food, technology, travel and more.

The 10 Highest-Paying Jobs

Don't have the time or money to get a bachelor's degree? Read More

HairStylesNews

Hairstyles, haircuts, hair care and hairstyling. Hair cutting and coloring techniques to create today's popular hairstyles. Read More