Quick Overview
- Platform implements 90-day revenue freeze for unlabeled AI war content
- Creators must disclose synthetic conflict videos or forfeit advertising income
- New enforcement targets monetization of undisclosed artificial warfare footage
- Missing transparency labels trigger automatic loss of creator earnings
- Payment eligibility now depends on AI content disclosure during conflicts
X has rolled out a 90-day monetization ban targeting creators who share AI-generated war videos without proper disclosure. The policy ties payment privileges directly to transparency requirements during active military conflicts, as X moves to curb misleading synthetic content while preserving platform credibility.
Revenue sharing now depends on transparency standards
X has overhauled its Creator Revenue Sharing program to specifically target synthetic conflict media. Creators who fail to label AI-generated warfare content now face automatic 90-day suspensions, making disclosure compliance a direct condition of payment eligibility.
The restrictions apply only to video content depicting military conflicts and armed confrontations; the policy stops short of banning AI-generated media platform-wide. X's approach instead concentrates on ensuring transparency when creators distribute sensitive conflict-related material.
Nikita Bier, X's product chief, announced the updated policy on Wednesday. "During times of war, people must have access to authentic information on the ground," he said, noting that contemporary AI systems enable rapid production of misleading material.
Detection systems combine community input with technical analysis
X plans to enforce the policy through multiple identification methods. Content flagged as AI-generated by Community Notes contributors will be reviewed, and technical markers and digital signatures embedded by generative software may also inform enforcement actions.
A first violation removes an account from the revenue program for 90 days, and creators who repeatedly break the rules risk permanent exclusion from monetization. X has committed to ongoing improvements in its detection infrastructure to support the policy.
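As a rough illustration of the escalation described above (not X's actual implementation; all names here are hypothetical), the strike logic could be sketched as:

```python
from dataclasses import dataclass

# Illustrative sketch only: models the escalation the article describes —
# a 90-day revenue suspension on a first strike, with repeat violations
# risking permanent exclusion (modeled here as the second strike).
SUSPENSION_DAYS = 90

@dataclass
class CreatorMonetization:
    violations: int = 0
    suspension_days_left: int = 0
    permanently_excluded: bool = False

    def record_violation(self) -> str:
        """Apply the escalation policy for one undisclosed AI war video."""
        self.violations += 1
        if self.violations == 1:
            self.suspension_days_left = SUSPENSION_DAYS
            return f"revenue sharing suspended for {SUSPENSION_DAYS} days"
        self.permanently_excluded = True
        return "permanently excluded from monetization"

creator = CreatorMonetization()
print(creator.record_violation())  # first strike: 90-day suspension
print(creator.record_violation())  # repeat offense: permanent exclusion
```

The key design point the policy implies is statefulness: enforcement depends on a creator's violation history, not just the current video.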
Unlike conventional content moderation, X will not automatically delete or label every violation. Instead, the platform aims to remove the financial incentive behind viral misleading content, discouraging deception without imposing broad content restrictions.
Regional conflicts amplify concerns over false information
The policy revision arrives amid escalating Middle East tensions. On February 28, coordinated military operations by the United States and Israel targeted Iranian locations, triggering sharp reactions across financial markets and social media.
Bitcoin briefly dropped to around $63,000 after the military action before rebounding and stabilizing near $70,000, according to trading data. Discussion of the events surged on X and rival platforms.
Artificial intelligence has also become embedded in modern warfare. On March 1, U.S. military personnel employed Anthropic's Claude system for strategic intelligence related to these operations, heightening anxieties about synthetic media and real-time disinformation.
X said the revised guidelines are meant to preserve authenticity across user feeds, emphasizing that conflict situations demand trustworthy, verifiable information sources. The company has deliberately tied financial compensation to responsible content distribution.
The platform faces growing industry-wide pressure to address manipulated media, with regulators and advocacy groups calling for stronger measures against deepfakes during international crises. With this targeted policy, X is attempting to balance free expression with creator accountability.
X's position is clear: synthetic warfare footage must carry transparency disclosures. Creators who comply with the labeling requirements keep their monetization status; those who do not face revenue suspensions of at least 90 days.