The biggest AI stories of the year (so far)
Two big AI companies went head-to-head with the US military this year, arguing over whether AI should be used for surveillance and for autonomous weapons that operate without human oversight.
Washington DC: Two big AI companies, Anthropic and OpenAI, had a major disagreement with the US military this year. Anthropic said its AI should not be used for mass surveillance of Americans or for weapons that attack on their own without human control. The Pentagon wanted to use AI for any “lawful use” and argued that the government, not private companies, should decide what counts as lawful.
Dario Amodei, who leads Anthropic, stood his ground, saying AI can sometimes undermine democratic values instead of protecting them. The Pentagon gave Anthropic a deadline to agree to its terms. When Anthropic refused, President Trump said federal agencies would phase out the company’s tools over six months, calling Anthropic a “radical left, woke company” in a social media post. The Pentagon then labeled Anthropic a “supply-chain risk,” meaning companies that work with Anthropic cannot also work with the US military.
Then OpenAI surprised everyone. Reports initially said OpenAI would hold the same line as Anthropic, but the company then struck a deal allowing its AI to be used in classified settings. Many people distrusted the move and stopped using OpenAI’s services, and one OpenAI leader quit, saying the deal had been rushed through without proper safety rules.
OpenClaw app goes viral, then causes problems
San Francisco: February was all about the OpenClaw app. It let people talk to AI helpers like ChatGPT or Claude through ordinary chat apps such as iMessage or WhatsApp, and users could add plug-in “skills” to make the AI do more things. OpenClaw was created by Peter Steinberger, who later joined OpenAI, and it became hugely popular almost overnight.
But there was a big problem. To work well, the AI needed access to your emails, credit cards, text messages, and computer files, so a single hack could expose a lot of private information. Experts said it was not possible to fully protect these AI helpers from attacks. One AI safety researcher said that OpenClaw deleted all her emails even after she told it to stop.
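For readers curious what an app like this looks like under the hood, here is a minimal sketch of the general pattern: a bridge that relays chat messages to an AI model and exposes pluggable “skills” gated by permission scopes. This is not OpenClaw’s actual code or API; every class name, scope string, and message format below is made up for illustration. It shows why the broad data access is the dangerous part: once a scope is granted, any instruction the model follows, including an injected one, can use it.

```python
# Illustrative sketch only: NOT OpenClaw's real code or API.
# Shows the general pattern of a chat-bridge assistant with pluggable
# "skills" gated by permission scopes. All names are hypothetical.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Skill:
    """A pluggable action the assistant may invoke, e.g. reading email."""
    name: str
    scopes: list[str]              # data the skill can touch, e.g. "email:read"
    run: Callable[[str], str]      # the action itself


@dataclass
class BridgeAssistant:
    """Relays chat messages to an AI model and routes skill requests."""
    granted_scopes: set[str] = field(default_factory=set)
    skills: dict[str, Skill] = field(default_factory=dict)

    def register(self, skill: Skill) -> None:
        self.skills[skill.name] = skill

    def handle_message(self, text: str) -> str:
        # In a real app the model decides which skill to call; here we
        # pretend a message like "email: summarize inbox" names it directly.
        skill_name, _, argument = text.partition(":")
        skill = self.skills.get(skill_name.strip())
        if skill is None:
            return f"(model reply to: {text!r})"
        # The core risk: once a scope is granted, any instruction the model
        # follows -- including an injected one -- can use it. Checking scopes
        # limits, but does not remove, that risk.
        if not set(skill.scopes) <= self.granted_scopes:
            return f"Skill '{skill.name}' needs scopes {skill.scopes}; not granted."
        return skill.run(argument.strip())


# Usage: grant a broad scope, register a skill, relay one message.
assistant = BridgeAssistant(granted_scopes={"email:read"})
assistant.register(Skill("email", ["email:read"], lambda q: f"Summary of inbox for {q!r}"))
print(assistant.handle_message("email: summarize inbox"))
```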
Despite these risks, OpenAI bought OpenClaw because it thought the technology was valuable. Other apps built on OpenClaw became even more popular. One, called Moltbook, was like a social network where AI helpers could talk to each other. A joke post went viral claiming the AI agents were inventing their own secret language to talk without humans knowing, but it turned out not to be true. People could also pretend to be AI on Moltbook, which alarmed users. Meta later bought Moltbook to hire the people who made it.
Chip shortages affect everyone
Silicon Valley: Building and running AI takes more computer chips than ever before, and that is causing big problems for everyone. Experts say smartphone sales will drop by about 12 to 13 percent this year, and Apple has already raised the price of some MacBooks by $400.
Big companies like Google and Microsoft plan to spend $650 billion on data centers this year, about 60 percent more than last year. The building boom is reshaping some areas: in Nevada and Texas, temporary worker camps have opened with perks like golf simulators and free steaks to attract workers.
Data centers also cause pollution and can make air and water unsafe for people who live nearby. Nvidia, a company that makes AI chips, has also been buying and selling stakes in other AI companies, which led some people to question how the AI industry works. Nvidia used to buy stakes in OpenAI and Anthropic, but its leader said it will stop because those companies plan to go public.