
Whether artificial intelligence fascinates or frightens you, there’s no denying that it is here to stay and improving rapidly.

The burgeoning field of AI competitors will continue to push each other further and faster, but the questions remain: How good will these models get? How deeply will they become entrenched in our lives? And, most importantly, what are the risks of adopting them?

For example, Google released a new version of Gemini, the company’s flagship large language model, on March 25. The release was quickly followed by some changes to the company’s AI executive leadership. 

The new model, Gemini 2.5, is a reasoning model, which means it pauses to “think” before delivering an answer and then shows its work, explaining how it arrived at that answer.

OpenAI christened the era of reasoning models back in September with the release of o1, adding a new wrinkle to the field of language generators like ChatGPT. 

While OpenAI may have struck first, Google did not pull any punches in its response. The latest Gemini model is capable of vastly improved reasoning, and many now rank it atop the industry, above similar models from OpenAI, Anthropic, and DeepSeek, according to TechRepublic.

These initial murmurs are growing into a consensus as more tests are run on Gemini 2.5. Standard benchmarks showcase its apparent dominance across math, science, coding, sheer knowledge, and speed, according to independent AI research company Artificial Analysis.

However, Google has ramped up production of new models so quickly that it has neglected its promise to publish safety reports with each significant release. The launch of Gemini 2.5 came just three months after the 2.0 version dropped, and public transparency seems to be an afterthought, TechCrunch reported.

Meanwhile, the U.S. Artificial Intelligence Safety Institute, created last year under the Biden administration, has lost federal support and is one of the Trump administration’s layoff targets, per TechCrunch. 

A slew of recent hearings in Congress aimed to strategize how the United States can best benefit from AI growth, and similarly bullish discussions are on the schedule. 

This acceleration and lack of oversight in the industry comes at a time when the very tools researchers developed to measure AI models are becoming irrelevant, a major concern for AI safety advocates. AIs now routinely ace benchmarking tests, leaving the metrics “saturated” past the point of usefulness, Vox’s Kelsey Piper explained.

The newest AI developments are even beginning to chip away at the hardest test on the market, known as “Humanity’s Last Exam.” Courtesy of the Center for AI Safety and Scale AI, it’s a compilation of extremely difficult questions created by distinguished experts around the world in fields from philosophy to rocket science, according to The New York Times. 

Since the test was released in January, no model had cracked a double-digit score until OpenAI’s o3 model and Google’s Gemini 2.5, which clocked 13.4% and 18.2% accuracy, respectively, per the test makers.

However, not everyone buys into the validity or implications of these tests, and sometimes, that’s for good reason. 

The o3 model, OpenAI’s groundbreaking follow-up to the original o1 reasoning AI, performed exceptionally well on a math benchmark. It was later discovered, however, that the company financially supported the test maker and had access to “much but not all” of its data, according to The Decoder.

No matter how the AI industry evolves from here, the lives of everyday folks will be impacted, for better or worse. 

Yes, people will be able to write faster and get a cool recipe or a piece of free advice, among untold other advancements. But even Microsoft cofounder Bill Gates admits that the future of AI is “a little bit scary,” and that certain concerns, namely the spread of misinformation, are valid, according to NBC.

Here are four more key areas of AI development to watch going forward. Remember to take steps to keep yourself safe online, such as not sharing private information with AI tools and knowing how to spot fake images.

Art and copyright

While a U.S. appeals court ruled that AI-generated art with no human creator cannot be copyrighted, real artists’ work that is most definitely copyrighted has been mimicked by AI since the early days.

This issue resurfaced in full force last week with OpenAI’s update to its GPT-4o model, which allows users to convert images into Studio Ghibli-inspired scenes. The Japanese animation studio, founded by Hayao Miyazaki, prides itself on meticulously hand-drawn frames.

The studio and the wider anime industry have spoken out as the “Ghiblification” trend has taken off. 

Education

Many students are quick to accept covert AI assistance on homework assignments, but educators and lawmakers appear split on whether they should be formally bringing the technology further into schools, K-12 Dive reported.

As Rep. Frederica Wilson (D-Fla.) said during a recent hearing on the matter, lawmakers are “missing the real crisis: the dismantling of the Department of Education. It’s absurd to envision a bright future for our students when the Office of Education Technology — vital for AI oversight — has just been shut down.”

“This is like worrying about the ship’s Wi-Fi access while the Titanic is sinking,” she added. 

Psychology and emotions

Now that AI technologies have been in the mainstream for a few years, the field is starting to better understand how people are affected by regular AI usage.

For example, OpenAI and MIT researchers found that high-volume users of ChatGPT showed signs of emotional dependency and even addiction to the chatbot, according to The Byte.

AI models have also long been criticized for producing racist text. Even newer models still covertly reinforce racist stereotypes, according to findings from a team at Stanford University’s Human-Centered AI Institute.

Energy and economy 

Any advancement in AI requires a ton of power for computer servers and data centers, and any growth in the industry implies heavier reliance on the electric grid.

Current estimates put the power consumption of North American data centers on par with that of a mid-sized country.

While some see this relationship as an opportunity for energy providers to expand their infrastructure, others fear the massive energy burden will worsen the climate crisis.

At the same time, concerns over labor replacement have persisted as more industries push to automate jobs that once sustained a person’s income. 
