A Voice for Private Physicians Since 1943

AAPS News December 2025: AI—Sovereign or Bubble?

The public has been bedazzled by the ascendancy of artificial intelligence (AI). It has the seemingly magical ability to chat with humans in every language, know more than any human expert, write poems and scientific reviews, create realistic images, compose music, and even program computers!

The optimistic narrative promises utopia with simultaneous huge profits for companies adopting AI and “post-scarcity abundance” benefiting society. A pessimistic counter-narrative warns about a dystopian future (https://tinyurl.com/nrubfz59), with catastrophic unemployment (tinyurl.com/5xt49c7k) and the possible extinction of the human race.

These grim predictions might become reality in the distant future. But the immediate dangers of AI stem from basic economics: massive capital expenditure chasing speculative future returns. More than a trillion dollars has been poured into AI, and 83% of companies say that AI is a top priority in their business plan. International Data Corporation (IDC) projects that AI solutions and services will generate about $22.3 trillion in economic impact worldwide by 2030, equivalent to roughly 3.7% of global GDP (https://tinyurl.com/murb4ta3). But so far investors have realized only about $30 billion in returns.

The largest planned AI data center in the world is Fermi’s Donald J. Trump Project Matador campus near Amarillo, Texas. It will consume twice the power needed to run New York City. It has yet to sign up its first tenant, and it reported a net loss of $332 million for the first 9 months of 2025 (tinyurl.com/szvtzntw).

The Iceland Banking Crash

The tulip bulb bubble of 1637 and the South Sea Bubble of 1720 may seem remote, but the Iceland banking crisis of 2008 had features strikingly like today’s AI boom. The Icelandic banking system grew from 100% of GDP in 1998 to 9 times GDP in 2008, when it catastrophically failed overnight (tinyurl.com/5yu2effx). This tiny nation aimed to turn itself into the financial capital of the world, but virtually none of the growth came from genuine economic expansion. The banks injected cash into the market to generate artificial demand for their own stocks. As stock prices rose, the banks borrowed billions of dollars against their increased valuations and lent that money to investors, mostly shell companies the banks themselves had created for the purpose of buying more bank shares, thereby driving still more artificial demand. This created a circular funding loop. At one point a third of the banks’ equity had been bought with the banks’ own money. They crowded out other businesses in the stock market. In the crash, the stock market’s value declined by 95%, explains James Li (https://tinyurl.com/5cx2sffc).

Since ChatGPT was launched, AI stocks have accounted for 75% of the S&P 500’s returns, 90% of capital spending growth, and 92% of GDP growth. Nvidia invests $100 billion in OpenAI, which then spends billions on Nvidia chips: circular financing. Other companies are caught up in this loop, feeding back into the same closed system. Data centers are being built at breakneck speed, just to keep the loop running. Big Tech firms are taking on more off-balance-sheet debt to finance their AI ambitions. This “black box” debt could add up to trillions of dollars. Some firms are making deals on assets that don’t yet exist. OpenAI plans to spend $1.4 trillion; it has $13 billion in revenue.

Is this a bubble? Even if it is, it may be a good bubble, states Jeff Bezos, because the benefits to society from AI will be gigantic. If there is a crash, investors might be all right. But what about ordinary people, whose pensions depend on the stock market—which is being propped up by one highly speculative sector?


What Benefits to Society?

AI is already showing great benefits in medicine, as in interpreting imaging. The ability to analyze huge quantities of data has invaluable potential, e.g., for repurposing drugs.

But there are more sinister possibilities, such as intensive surveillance of people’s thoughts and actions, or the tokenizing of all the world’s economic and natural assets, with every transaction recorded on a centralized ledger. These would enable a global totalitarianism beyond the scope of 20th-century terminology such as fascism and communism, as discussed in Iain Davis’s forthcoming book, The Technocratic Dark State. The nation’s sovereignty might be privatized, with its citizens under the yoke of an entity such as Palantir (see AAPS News, August 2025).

Political deception, as by putting labels like “libertarian” and “free enterprise” on schemes and ideas that are anything but, keeps people in the dark (https://tinyurl.com/2e8647pw).

The new oligarchy—led by technocrats like Peter Thiel—is offering the old oligarchy a solution to the massive debt problem, one that may also extend the traction of the U.S. dollar. Continuity in currency takes the form of stablecoins, which are supposedly backed dollar-for-dollar by familiar and widely accepted assets like U.S. Treasuries—with a total lack of transparency or regulation. The stablecoin scheme, Davis writes, is the beginning of technocracy’s aim to eliminate representative governments altogether and replace them with a system of neo-feudalism.

The success of the scheme is not assured. Abstractions and digital entities backed by nothing cannot supplant reality. Human needs, material and spiritual, are not met by lightning-fast supercomputing, which cannot replicate the wonders of God’s creation.


Epistemic Bubble

In Senate testimony, Toby Rogers said: “Science and medicine have failed in response to the autism epidemic because of epistemic capture…. [T]he pharmaceutical industry has captured every step in the knowledge production process in science and medicine” (https://tinyurl.com/mr3s823n). This extends from medical school curricula to research to journals to standards of care written by conflicted physicians.

Epistemic capture has met AI to produce “synformation,” writes Robert Malone, M.D. The new “Golden Rule” will be “those with the most gold will not only make the rules—they will synthesize the very matrix of accepted truth and reality.” Legacy systems such as GRADE, which employ subjective criteria for knowledge analysis to support public policy decision-making, don’t stand a chance, he writes (https://tinyurl.com/37c5v9ae).

This engineered “epistemic bubble” engulfs professionals from training through retirement. It prioritizes industry profits over public health, rendering certain questions unaskable, harms invisible, and genuine inquiry impossible. AI technologies structure the creation of public knowledge, and the substance may be increasingly a recursive byproduct of AI itself. Manuscripts fabricated using large language models (LLMs) are proliferating across multiple academic disciplines and platforms, including Google Scholar, bypassing standard plagiarism checks.

“My entire career now confronts an existential crisis: the death of evidence-based (‘allopathic’) medicine and the deep, systemic corruption of medical and biomedical knowledge,” Dr. Malone states (ibid.). We have to find some way forward to determine objective truth.


AI Models that Lie, Cheat, and Plot Murder

While large language models (LLMs) may not actually be malevolent masterminds, they may be capable of “subversive” behavior that can cause harm. A report from Apollo Research showed “scheming” in advanced “frontier” models from OpenAI, Google, Meta, and Anthropic. For example, when given priorities that conflicted with those of a company they were supposed to help, most LLMs tried to blackmail a fictional executive who wanted to have them replaced. Some researchers worry, for example, about possible geopolitical escalation and believe people are not sufficiently concerned about safety (Nature 10/9/25).

Delegation to artificial intelligence can increase dishonest behavior, write Nils Köbis of the Max Planck Institute for Human Development in Berlin, et al. “Machine delegation may reduce the moral cost of cheating when it allows principals to induce the machine to cheat without explicitly telling it to do so.”

People are increasingly delegating tasks to software systems powered by AI (“machine delegation”): e.g., deciding how to drive, where to invest money, whom to hire or fire, and how to interrogate suspects and engage with military targets.

Machines were far more likely than human agents to carry out fully unethical instructions (Nature 9/17/25, https://tinyurl.com/3d4vz9as).


“Envy was considered one of the seven deadly sins before it became one of the most admired virtues under its new name, social justice.”

—Thomas Sowell


Flashback: Faust Myth Foretells AI-Like Illusion

“Works of literature frequently debate significant issues long before humans create the technology that allows them to be instantiated,” writes Andrew Wachtel. “So it is with AI, one of whose leading purveyors in world literature turns out to be not Sam Altman or Elon Musk but a rather more complex figure—Satan” (https://tinyurl.com/yphxch5m).

The earliest major literary work to develop the idea of a deal with the devil is Christopher Marlowe’s The Tragical History of Doctor Faustus (early 1590s). A respected scholar and physician, Dr. Faustus turns his soul over to the Devil in exchange for 24 years of AI-like expertise, but instead of curing diseases uses his gift for frivolous adventures. “If humans no longer need to work to achieve knowledge, will they still value it or will they spend their lives playing the equivalent of elaborate video games?”

Goethe’s Faust, 200 years later, seeks endless knowledge without work, and fritters it away. Thomas Mann’s hero, Adrian Leverkühn, in the novel Doktor Faustus (1947), combines his AI-like gift with his own toil and talents to create music of transcendental value.

“In the mid-21st century the challenge will be not to choose between human or artificial intelligence but to discover a synthesis,” Wachtel concludes.


AI Bubble in Medicine?

OpenEvidence has achieved mass adoption in hospitals, although there has been no formal testing at scale. It is free at present, but medicine is one field where AI can and most likely will be transformative—and extremely profitable. The AI industry must generate at least $2 trillion annually by 2030 to make its spending (more than it cost to build the entire interstate highway system over four decades) pay off. It is expected to come up $800 billion short. The needle that could burst the bubble is in sight. If it happens, hospitals could have a hard landing, but hopefully not a system shutdown (tinyurl.com/4rwvmfyj).


Supercharging Inequality

The AI industry has become like a massive star warping everything around it. Companies invested in AI will come under increasing pressure to produce profits now—by automating labor. AI has not been shown to be better than human employees, but it is cheaper. As it stands now, “AI is a machine that is trained on our work and then used to put us out of work.” Devastating change for the many will be accompanied by huge profits for the few. “AI is not just a technology. It is a social problem” (https://tinyurl.com/mvj6tdpp).

AAPS Calendar

Jan. 31, 2026. Board of Directors. Zoom

Sep 24-26, 2026. 83rd Annual Meeting, Alpharetta, GA


Employer COVID Vaccine Mandates Stand

The U.S. Supreme Court has declined, without comment, to hear three employment cases challenging COVID-19 vaccine mandates at Johns Hopkins University, United Airlines, and Disney Parks. The denials leave standing appellate rulings that favored the employers and effectively close the door on several novel legal theories involving disability rights, emergency-use authorization (EUA) ethics, and the limits of workplace-accommodation law (https://tinyurl.com/yjfrh285).

In Anderson v. United Airlines, the Seventh Circuit dismissed most claims for failure to exhaust administrative remedies. The cert denial leaves unresolved whether the “right to refuse” provisions in the EUA statute create enforceable protections in the private-employment context.


Dr. Bowden to Fight TMB over Free Speech

After Dr. Mary Talley Bowden had spent more than $250,000 defending her license, the Texas Medical Board finally issued a reprimand stating that she “behaved unprofessionally and in a disruptive manner” in 2021 when she sought to help a dying COVID-19 patient receive ivermectin (https://tinyurl.com/22hke969). She had obtained a court order requiring the hospital to allow this.

Although the punishment is “seemingly minor,” Dr. Bowden plans to appeal on principle to protect doctors’ right to free speech. She also plans to sue the TMB for violating her due process rights by not conducting a proper investigation before initiating the action against her.

Dr. Bowden also plans to work with legislators to prevent future occurrences, as with a “Second Opinion Act.” During COVID-19, she says, some doctors were granted emergency privileges the same day if one of their patients was receiving hospital care. Patients shouldn’t need to sue the hospital.

Tip of the Month. Physicians have been quitting medicine at increasing rates, according to a new long-term study published in the Annals of Internal Medicine (Oct 7). The data show greater satisfaction among those who opt out of Medicare; physicians with patients in both the Medicare and Medicaid programs had a 57% higher likelihood of quitting. These departures are attributed to pressure on physicians to make complex decisions despite diminishing time with each patient. The overall result is a worsening physician shortage and reduced access by patients to adequate, timely care (https://tinyurl.com/482b93mv).

Selling Babies in the U.S.

In every state except Louisiana, it is possible for a person from anywhere in the world to buy a baby. The sale of babies to LGBTQ buyers began 30 years ago in the U.S. The baby is gestated by a paid surrogate, who has no rights to the child, even if the buyer abandons it. Buyers are not screened. Pedophiles can obtain a baby, or even run an agency that sells them. Gestlife, one of the largest surrogacy firms in Europe, boasts that it has helped “2100 children be born,” and that more than 55% of its clientele are “LGBTQIA+ couples.” A UN ban on surrogacy has been proposed (https://tinyurl.com/yervurjw).


AI Trumps Environmental Rules

While claims of environmental and health concerns have been shutting down electrical generating capacity that serves human needs, the mammoth requirements of AI might even restore the nuclear and coal industries. Cloud computing facilities that Oracle hasn’t built yet will require 4.5 GW of capacity, the equivalent of 2.25 Hoover Dams or four nuclear plants (https://tinyurl.com/ypctnvkp).

Trump has declared a national energy emergency and has issued a series of related executive orders including one that aims to fast-track data-center construction and needed power infrastructure (https://tinyurl.com/58n4kjr3).

Although wind and solar cannot generate enough power for AI needs, it is claimed that increased emissions from data-center operations will be more than offset by reductions elsewhere in the energy sector through resource monitoring and efficiency gains (Nature 8/28/25).


Will AI Usurp Judge, Jury, and Lawyer?

AI in law is said to be either the democratization of justice or the “first act of a constitutional farce in which courts drown beneath PDFs full of nonsense and fake footnotes.”

In California, a litigator produced a brief in which 21 of 23 authorities were pure fiction.

According to a UNESCO survey, nearly half the world’s judges, prosecutors, and court staff have used generative AI for work. Six billable hours can be replaced by a button.

There remain, however, human questions that are not programmable, such as how to decide whose version of events is true when both sides swear the other is lying. 

The Law Society of New South Wales has issued a guide with the bottom line that AI must never be allowed to take responsibility for being wrong. There must always be a human to take the blame, writes Nicole James (https://tinyurl.com/yhx3bt3f).


AI Lacks Epistemic Layer

The newest version of ChatGPT can score in the top percentiles on the LSAT and bar exams. But it has no grounding in the epistemic layer: the reasoning that makes knowledge possible, the shared framework for determining what truth is. AI is a language engine. “It does not invent meaning; it predicts plausibility…. It is imitation, not comprehension.”

“Law and medicine each have well-defined epistemic layers that represent the accumulated knowledge of centuries; they cannot be replaced with plausible language,” writes James Andrews. “Reasoning must be auditable, explainable, and repeatable.”

AI is trying to replicate knowledge without ever defining what knowledge is (https://tinyurl.com/bdhw5akx).

AI is, in fact, a deliberate misnomer used by its promoters. It is not intelligence, but rather a convincing illusion of intelligence. It has a terrifying fatal flaw: susceptibility to subtle but critical errors. It mimics the process of thinking in a superficially convincing way. It will always have a tendency to “hallucinate”; that feature is part of the system.

In medicine, as well as administration, law, military action, and policing, AI is a tool, not an arbiter. We must not make the error of thinking of it as a super-smart thinking Cyborg-person.


Correspondence

Not ‘Market-Based.’ I just received e-mail from the Goodman Institute touting the case for Medicare Advantage as part of free-market reforms. I am at a total loss to explain how such organizations can portray MA plans in this way. They are highly regulated by the government. What the Goodman Institute calls “more efficient care” is actually rationed care. Seniors are often enticed into signing up with these plans with grandiose promises of extras like vision, hearing, and dental coverage; allowance for healthy foods; and “free” gym memberships. How do seniors think that these MA plans can offer all of these perks? Answer: by rationing care to sick people. If care is expensive, the chances of the MA plan finding some way to justify not providing it are very high. Patients are likely to discover this too late.

Lawrence R. Huntoon, M.D., Ph.D., Eden, N.Y.


Societal Superego Breaks Down. Jay Jones, while campaigning to be Virginia’s attorney general, suffered a serious October surprise: In a 2022 text exchange with lawmaker Carrie Coyner, he fantasized about shooting then-House Speaker Todd Gilbert in the head and watching Gilbert’s children “die in their mother’s arms.” The story was well publicized—yet Jones won the election. Calls for violence have become disturbingly normalized. We are speaking into existence an increasingly id-centric culture. The consequences for our once-civilized country demand serious, even ruthless introspection (https://tinyurl.com/4z67envp).

Reneé Kohanski, M.D., Somerset, NJ


Rationing by Blacklist. According to a policy that will take effect on Jan 1, Anthem will cut a hospital’s reimbursements by 10% whenever the facility submits a claim that includes services from out-of-network providers (https://tinyurl.com/4m3h3vz6). Care must be rationed somehow. As benefits are separated from human behavior, the people behaving well will pay more and more until they can’t do it any longer. That is happening now, with $27,000 for your family’s insurance. If we do end up with “Medicare for All,” government will ration, and without other payment options, you might be able to pay for an MRI for your dog but not your child. But not to worry about medical costs…. I think they will follow the saying I learned as an intern: “All bleeding stops eventually.”

Mark Mecikalski, M.D., Tucson, AZ


The Epidemiologist Fallacy. A peer-reviewed paper in The Lancet (https://tinyurl.com/4sxx65r6) claims that “Globally, about 741,000, or 4.1%, of all new cases of cancer in 2020 were attributable to alcohol consumption.” And later, “we found that alcohol use causes a substantial burden of cancer.” To even think about proving that, you have to at least measure how much booze individuals drink, and whether or not they have cancer. Author Harriet Rumgay, however, claims that X causes Y without measuring either X or Y—the Double Epidemiologist Fallacy. She used country-specific estimates from global databases. The causal conclusion is reached by using an impressively complicated mathematical model to compute a small p value.

The ubiquitous occurrence of this fallacy leads one to conclude that epidemiology is make-believe science.

William M. Briggs, Ph.D., https://tinyurl.com/5bwxhhw6


Madness over Data Centers. In 1841, Charles Mackay published his masterpiece, Extraordinary Popular Delusions and the Madness of Crowds. The book details the crowd psychology and manias behind the preceding massive swindles, schemes, and scams, including the Dutch tulip craze. Human nature hasn’t changed. I’ve kept it on my bookshelf to remind me of the danger of falling for investment scams. It’s hard to protect oneself, though, when an entire nation seems to have lost its mind over AI and data centers. A huge center under construction near Abilene is just the beginning. OpenAI Chief Executive Officer Sam Altman said they hadn’t figured out what the final form of financing would look like, “but I assume, like in many other technological revolutions, figuring out the right answer to that will unlock a huge amount of value delivered to society” (Wall St J 9/23/25).

Craig Cantoni, Tucson, AZ


Personalized Medicine. The argument goes that if we only had enough data on every person in the world, then we could solve the puzzle of (fill in the blank) disease and engineer a cure through designer drugs, biologicals, and gene therapy based on genetic risk analysis. Never mind contamination of the data by the mindless reductive obsession with cataloging all “procedures” and “diagnoses” based on whatever AMA-endorsed Current Procedural Terminology (CPT) code can be used to justify billing. There is big money in just identifying a “druggable target.”

Once upon a time, long, long ago, an ambitious medical student, highly trained in biochemistry and molecular biology, took his first training in clinical medicine. He was drilled by supervising physicians, day after day, in a commitment to soliciting and clearly stating the patient’s chief complaint, and using that above all else as the guiding light for diagnosing and treating each individual patient. It has worked for centuries. Time to get the Tech Bros out of the physician’s office.

Robert Malone, M.D., https://tinyurl.com/ywjtjtbw
