Media credibility is more important than ever. While the vast majority of journalists continue to do a terrific job, people trust reporting less and less because of disingenuous attacks on so-called “fake news.”

Similarly, the polling industry is facing a crisis of consumer skepticism. It’s more difficult than ever to separate quality pollsters from the glut of fly-by-night operations, and meticulous professionals are struggling to achieve representative samples in an era of disappearing landlines.

The most significant blow to pollsters was largely unfair and unlucky, rooted in a variety of misunderstandings about the 2016 presidential polls. (Most national polls were very accurate, while some state polls weren’t – and those small discrepancies tipped the balance to Trump, fueling the oversimplified conclusion that “all polling is broken.”)

Because of these dynamics, it’s more important than ever for media-sponsored polls to avoid making easy errors. We’ve seen how Trump and his acolytes exploit and extrapolate minor honest mistakes, falsely ascribe malignant motives, and claim that every story and every poll is therefore false. That’s why the LA Times / USC Dornsife poll is a mess – and a disservice to all the honest and talented Times reporters who are unfairly tainted by association.

Perhaps most egregious and intellectually dishonest is the Times’ decision to promote the poll based on its status as an outlier in the 2016 presidential election. Clearly, erroneously predicting that Trump would win the popular vote by a large margin is not somehow validated because he won the electoral college. As the Times and USC surely know, their 2016 polling was wildly off-base – more so than most polls. It just happened to be wrong in the direction of the candidate who won the election.

Bizarrely, their polls stay in the field for over a month, meaning the survey released this week includes opinions gathered as far back as mid-September – before most campaigns were communicating with voters, and before many people had begun to focus on the election.

On ballot issues, rather than follow industry best practices and present voters with the full wording of initiatives as they appear on ballots, they instead tested language from the “title and summary” that only appears in voter pamphlets. They also arbitrarily removed the critical information about fiscal impacts that is printed in voter pamphlets. This fundamental error alone makes the results completely unreliable.

On Prop 6, the gas tax repeal, they tested the wrong title and summary, using an old version that has since been significantly revised. And of course the result is massively skewed by leaving out the fiscal fact that the initiative would strip $5 billion from bridge and road safety repairs. Similarly, testing Prop 10’s actual ballot language – not to mention its fiscal impact statement – reveals dramatically higher opposition.

The errors range from baffling to just plain sloppy. Without going down important rabbit-holes about turnout models, weighting samples, and self-reported party identification, here are a few additional mistakes they’ve made this cycle:

– Referred to the U.S. Senate race as the “election for California State Senate.”

– Failed to include political party designations.

– Called Antonio Villaraigosa “former LA Mayor” (a designation that isn’t allowed on ballots and therefore wasn’t the job description listed for him).

– Similarly listed Delaine Eastin as the incumbent (not even former) Superintendent of Public Instruction, a position she hadn’t held in nearly 15 years.

– Misspelled John Chiang’s name.

That’s not to say the Times / USC results are always wrong – even a stopped clock is right twice a day, and some of their results mirror private data from the best pollsters (sorry, John Cox). There is no shortage of better polls and better pollsters, and we’d all be better off if media outlets took the extra effort to ensure they’re associated with the good ones.