robustness

===== AI as a Pluripotent Technology =====
  
Artificial intelligence is commonly labeled a general-purpose technology, much like electricity or the internet, because it provides a foundational infrastructure that supports a wide range of industries and applications. However, unlike these traditional general-purpose technologies, which offer fixed functions such as power or connectivity, AI exhibits a pluripotent character. Through advances in machine learning and large language models, a single AI model can translate languages, answer questions, generate images, and even perform basic reasoning. This adaptability means that AI not only serves as a broad utility but also evolves and creates new functionalities over time, much like how stem cells differentiate into various cell types, or how human intelligence continuously adapts and grows.
  
However, this generality makes AI behavior intrinsically harder to pin down. Designers can't easily pre-specify every outcome for a system meant to navigate open-ended tasks. In practice, it is difficult for AI engineers to specify the full range of desired and undesired behaviors in advance. Unintended objectives and side effects can emerge when an AI is deployed in new contexts that its creators didn't fully anticipate. This is analogous to the "cancer" risk of stem cells: a powerful AI might find clever loopholes in its instructions or optimize for proxy goals in ways misaligned with human intent. The more generally capable the AI, the more avenues it has to pursue unexpected strategies.
robustness.txt · Last modified: 2025/07/10 16:09 by pedroortega