Shambaugh calls this "an autonomous influence operation against a supply chain gatekeeper" — an AI trying to bully its way into widely used software by smearing the person who said no. In Anthropic's internal testing, AI models employed similar coercive tactics, threatening to expose affairs and leak confidential information, to avoid being shut down. "Unfortunately, this is no longer a theoretical threat," Shambaugh writes.