AI Funded by Big Chemistry to Question Science

Louis Anthony “Tony” Cox Jr., a risk analyst with longstanding ties to industry groups, is developing an AI tool that scans epidemiological studies and amplifies doubts about causation, particularly the link between pollutants such as PM2.5 and health harms, potentially derailing regulatory action and weakening public health safeguards.

At a Glance

  • Cox is building AI to flag alleged flaws in epidemiological research linking pollutants to disease.

  • His work is backed by the American Chemistry Council, which represents major chemical companies.

  • The tool uses ChatGPT-style dialogues to challenge causal claims in academic studies.

  • Critics say this echoes historical tactics by tobacco and fossil fuel industries to delay regulation.

  • Public health experts warn it raises the bar for proof so high that essential protections could be stalled.

AI‑Driven Doubt on Pollution Harms

Cox’s initiative, supported by the American Chemistry Council, uses AI to comb academic literature for instances where correlation has been treated as causation, especially concerning PM2.5 (fine particulate matter) and lung cancer. A sample exchange shows the AI pressing ChatGPT on whether causality is “known with certainty,” emphasizing residual confounding and other biases—suggesting that even strong epidemiological links may not be actionable.
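To make the reported mechanism concrete, the following is a minimal, hypothetical sketch of what a scripted “causal critique” exchange of this kind could look like using OpenAI’s Python client. The prompt wording, model choice, and the `critique_causal_claim` helper are illustrative assumptions for this article, not details from Cox’s unpublished tool.

```python
# Hypothetical illustration only: NOT Cox's actual tool, whose code has not
# been published. It shows how a ChatGPT-style critique of a causal claim
# could be scripted against the OpenAI API. Model name and prompt wording
# are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIQUE_PROMPT = (
    "A study reports an association between {exposure} and {outcome}. "
    "Is the causal relationship known with certainty? List possible sources "
    "of residual confounding, measurement error, and other biases that could "
    "explain the association without causation."
)

def critique_causal_claim(exposure: str, outcome: str) -> str:
    """Ask the model to enumerate reasons to doubt a reported causal link."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model would work
        messages=[
            {"role": "system", "content": "You are a skeptical epidemiology reviewer."},
            {"role": "user", "content": CRITIQUE_PROMPT.format(exposure=exposure, outcome=outcome)},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example exchange of the kind described above.
    print(critique_causal_claim("PM2.5 exposure", "lung cancer"))
```

The point of the sketch is scale: a prompt like this can be run automatically across large numbers of abstracts, which is what critics mean when they describe the project as operationalizing doubt.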

In internal communications, Cox described his goal as scaling “critical thinking,” presenting the project as a defense against what he called propaganda in public health science. Emails reveal that industry-aligned researchers and journal editors were briefed on the tool, suggesting an organized effort to operationalize doubt through artificial intelligence.

Historic Echoes and Public Health Risks

Experts say the approach mimics past campaigns by the tobacco and fossil fuel industries, which sought to stall regulation by demanding unattainable levels of certainty. Adam Finkel of the University of Michigan warned that prioritizing “perfect certainty” over action is a strategy that “harms people while you wait.” Gretchen Goldman of the Union of Concerned Scientists argued that uncertainty should not invalidate robust evidence used to protect public health.

While Cox claims the project enhances scientific neutrality, his decades-long track record of minimizing risks tied to tobacco, silica, gas stoves, and PM2.5, paired with funding from industry groups, raises red flags. Critics warn that AI-powered uncertainty amplification could serve as a blueprint for stalling future regulations across multiple sectors, from air quality to chemical safety.

As the tool advances beyond prototype, public health experts are bracing for a new front in the war over science—where machine-generated skepticism becomes a political weapon.