AI - August 7, 2025

AI’s Alarming Risks Exposed: Insufficient Understanding, Insecurity, and Potential for Misinformation – Gary Marcus

In Las Vegas, renowned AI expert Gary Marcus delivered a candid talk at Black Hat, voicing concerns about the future of artificial intelligence (AI) without resorting to blanket skepticism or doom-mongering.

Marcus, founder of two AI startups and an esteemed cognitive scientist, is cautious about AI’s potential to develop into AGI (artificial general intelligence), superintelligence, or any successor to human cognition. He believes that while there may be benefits for humanity in AI’s evolution, there are significant risks worth considering.

One of the primary concerns Marcus addressed was the growing electrical demand from AI data centers, which could strain existing infrastructure and potentially lead to nationwide power outages or blackouts. “It’s not clear that it’s sustainable,” Marcus explained, “and it’s not clear what will happen if it’s not.”

Another issue Marcus highlighted was the insecurity of large language models (LLMs), which he described as mimicking existing knowledge without genuinely understanding the underlying concepts. Because these systems lack a ‘world model,’ he argued, they remain vulnerable to attack even when guardrails such as secure coding practices are in place.

“They can say things that are like the things they heard before,” Marcus stated, “but they’re conceptually very weak.” He predicted that this vulnerability could lead to widespread financial instability due to insecure code and misinformation generated by these systems.

Despite these concerns, Marcus was hesitant to predict an existential threat from AI. Instead, he expressed worry about the impact on individuals who rely heavily on AI, suggesting that over-reliance could lead to a decline in critical thinking skills as people become increasingly dependent on AI for decision-making.

Marcus also questioned the notion of a race to build AI, particularly given recent attacks on science and scientific institutions by influential figures such as Elon Musk. “The winner of the race with China is not going to be the person that builds the larger LLM,” he asserted. Instead, he suggested that breakthrough discoveries could provide a significant advantage.

Despite his skepticism about the global AI ‘race,’ Marcus did identify potential positive aspects of AI development. He emphasized the importance of purpose-built systems designed for specific tasks rather than general-purpose AI like chatbots. As an example, he pointed to DeepMind’s AlphaFold – a model that predicts protein structures for medical research and whose creators were awarded a share of the Nobel Prize in Chemistry last year.

“It’s not an LLM,” Marcus noted. “It’s a purpose-built thing. It doesn’t have all of these security problems. It’s not a chatbot trying to be one size fits all, that does everything.”

In conclusion, while Gary Marcus acknowledges the potential benefits of AI development, he remains mindful of the risks that accompany its advancement. He encourages continued exploration, cautions against over-reliance on general-purpose AI systems, and stresses the value of purpose-built applications for specific tasks.