From the Internet  - identity untraceable.

 

  1. I get downvoted heavily for "not understanding how it works."

  2. The most recent example was a bunch of people taking issue with me pointing out that LLMs store data in memory, because, according to them, LLMs do not store data, just "statistical relationships between how words are connected."

  3. Which is, of course, data stored in memory. The means by which it does so is novel, but the machine could not reproduce information if that information was not stored. Storing data in a neural net is still storing it, even if it does not look like traditional data storage.

  4. If you bring up any limitations or flaws in the technology, despite them being objective and obvious, people just claim you don't understand it and that their sci-fi interpretation of it is the real "understanding it."

  6. It is at the point where people have legitimately started to think that LLMs are actually conscious, and their evidence for that assertion is that I cannot prove humans are conscious.

  8. Which is untrue, because every person can prove to themselves that at least one human is conscious. But it is also an admission that there is no real evidence that the machines are conscious, as they have to fall back on asking me to prove a negative.

  9. So everyone is getting all terrified of super intelligent AGI* based entirely on a hype marketing campaign by a bunch of companies that are trying to replace workers with machines, instead of getting terrified of what it means that companies are trying to replace workers with machines.

  10. * AGI, or Artificial General Intelligence, refers to a hypothetical intelligence capable of performing any intellectual task that a human can. It's a type of AI that aims to mimic human cognitive abilities, including the ability to learn, reason, and adapt to new situations.
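The storage point in item 3 can be sketched with a toy example (my own illustration, not anything from the original post, and nothing like a real LLM): a tiny linear model is trained by gradient descent to memorize a word, the training data is then discarded, and the word is reconstructed from the weights alone. If the weights can reproduce the information, the information was stored in them.

```python
import numpy as np

rng = np.random.default_rng(0)

message = "data"
# One-hot position vectors as inputs; character codes as targets.
X = np.eye(len(message))
y = np.array([ord(c) for c in message], dtype=float)

# The weight vector is the only "storage" we keep after training.
W = rng.normal(size=len(message))

# Plain gradient descent on mean squared error.
for _ in range(2000):
    pred = X @ W
    grad = X.T @ (pred - y) / len(message)
    W -= 0.5 * grad

# Reconstruct the message from the weights alone: no lookup table,
# no copy of the original string, just learned parameters.
recovered = "".join(chr(round(v)) for v in (X @ W))
print(recovered)  # prints "data"
```

The representation is a vector of floats rather than bytes in a file, but the outcome is the same: feed in the right query and the stored information comes back out, which is the sense in which "statistical relationships" are still data in memory.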