Elehman839 

The Existential Risk of AI: A Real Threat, Not Sensationalism

Critical Analysis of AI Development and Its Societal Implications

Understanding the AI Risk Debate

This discussion continues the exploration of concerns about superintelligent AGI, questioning whether the fear reflects legitimate risk or a marketing campaign by companies seeking to replace workers with machines.

The conversation examines the real implications of companies attempting to automate human labor and the potential existential risks posed by advanced AI systems.

1. Critical Analysis of AI Risk Commentaries

  1. A problem with commentaries like this is that the authors (apparently) have not been involved in the development of any AI model, so their arguments are necessarily limited to interpreting public remarks by actual model developers, who are pretty secretive about their work these days. As a result, they have little foundation for their arguments.
  2. Furthermore, there is a strong tendency in this particular article toward self-citation; that is, justifying statements in paper N+1 by referencing papers N, N-1, N-2, etc. For example, the first author (Subbarao Kambhampati) appears in 12 of his own references. In other words, this author is largely engaged in a protracted conversation with himself.
  3. This can produce weird effects. For example, the only defense for the "just statistically generated" claim is a reference to another paper by the same author, which (strangely) makes no use of the word "statistics" at all.
  4. This doesn't mean that the paper is necessarily wrong, but the arguments are fluffy compared to a typical paper from leading AI research organizations.


"The arguments are fluffy compared to a typical paper from DeepSeek, OpenAI, DeepMind, etc., written by people who actually do this technical work hands-on instead of watching from afar as others do the direct work."

2. Deep Learning: A Paradigm Shift in Knowledge Utilization

  1. A traditional approach to making knowledge useful involves two steps. First, humans study data, discover patterns, and distill these observations into "rules of thumb", equations, laws of science, etc. Then engineers incorporate these principles into their machinery, computer programs, chemical processes, or whatever. We've done this for centuries, millennia, and perhaps longer.
  2. The larger significance of deep learning is that it short-circuits this process; it allows us to go directly from raw data to useful mechanisms with no human understanding in the middle. We've now constructed masses of math and computation that automate this process end-to-end.
  3. In a sense, this is familiar. I bet you can recognize a picture of your mother, but you cannot describe how the masses of neurons in your brain that perform this feat actually work. What's new is that deep learning lets us accomplish such feats in silicon rather than in biological neurons.
  4. The history of AI research suggests that regarding understanding of human cognition as a precondition for creating AI was actually a fundamental barrier to progress. Human cognition is probably too complex for humans to understand in detail.
  5. So, coming back to your comment above, my personal view is that an understanding of human cognition is not a requirement for constructing superintelligent AGI.
  6. Only by shedding the requirement of "understandability" of intelligence do we open up the possibility of mimicry.
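The contrast in points 1 and 2 above can be sketched in a few lines of code. This is a minimal illustration (not from the original discussion, and using a toy linear process rather than a deep network): the "traditional" route has a human inspect the data and write down the discovered rule, while the "learned" route fits parameters directly from raw data by gradient descent, with no human-articulated rule in the middle.

```python
# Raw data from some process we pretend not to understand (here, y = 2x + 1).
data = [(x, 2 * x + 1) for x in range(-5, 6)]

# Traditional route: a human studies the data, spots the pattern, and encodes it.
def hand_written_rule(x):
    return 2 * x + 1

# Learned route: start from arbitrary parameters and let optimization
# extract the pattern from the data itself.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y          # prediction error on one example
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                   # gradient step on mean squared error
    b -= lr * grad_b

# Both routes now make (nearly) identical predictions, but only the first
# required a human to understand and articulate the underlying rule.
print(hand_written_rule(3))            # 7
print(round(w * 3 + b, 2))             # 7.0
```

The learned parameters converge to roughly w = 2, b = 1 without anyone ever writing that rule down; scale this idea up by many orders of magnitude and you get the "raw data to useful mechanism" short-circuit described above.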
