Why AI must be understood as systems, not just models


CO-EDP, VisionRI | Updated: 24-03-2026 16:12 IST | Created: 24-03-2026 16:12 IST
Representative image. Credit: ChatGPT

New research suggests that modern AI systems, especially large language models, cannot be understood in isolation but must be viewed as products of broader computational environments that shape their behavior, outputs, and meaning. The study argues that the concept of the “model” itself is becoming increasingly inseparable from the systems in which it operates.

Published in AI & Society, the study titled “Systems Programming the Model” examines how generative AI models are constructed, deployed, and interpreted within complex technological infrastructures. The research presents a systems-level perspective, showing that what is commonly referred to as a model is not a static artifact but an emergent entity shaped by interactions between training processes, prompting mechanisms, and execution environments.

AI models emerge from systems, not isolation

The study challenges the traditional view of AI models as discrete units that can be analyzed independently of their deployment context. Instead, it argues that models are fundamentally shaped by the systems in which they are embedded. These systems include not only the trained neural networks themselves but also the surrounding infrastructure, such as prompting frameworks, sampling strategies, and execution pipelines.

According to the research, the behavior of a language model cannot be fully explained by its training data or architecture alone. Instead, it emerges from the interaction between multiple components that operate together during runtime. These interactions determine how inputs are processed, how outputs are generated, and how the model adapts to different contexts.
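The point that behavior depends on more than the trained weights can be illustrated with a toy sketch (not from the study): the same fixed "model", here just a hypothetical next-token distribution, behaves differently under different runtime sampling strategies.

```python
import random

# Illustrative sketch: the same fixed "model" (a toy next-token
# distribution) behaves differently under different sampling strategies,
# so the runtime system, not the weights alone, shapes the output.
# MODEL and both functions are hypothetical placeholders.

MODEL = {"yes": 0.6, "no": 0.3, "maybe": 0.1}  # toy distribution

def greedy(dist: dict) -> str:
    """Deterministic decoding: always pick the most likely token."""
    return max(dist, key=dist.get)

def sample(dist: dict, rng: random.Random) -> str:
    """Stochastic decoding: draw a token proportionally to its weight."""
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# greedy(MODEL) is always "yes"; sample(MODEL, ...) varies run to run,
# even though the "model" itself never changes.
```

Swapping the decoding function changes what users observe without touching the model, which is the sense in which the surrounding system co-determines behavior.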

This perspective marks a shift from a model-centric view of AI to a systems-oriented approach. In traditional machine learning, models were often treated as standalone objects that could be evaluated based on performance metrics. However, the study suggests that this approach is no longer sufficient in the context of modern generative AI.

The rise of complex AI systems has introduced new layers of abstraction that influence model behavior. These include orchestration frameworks that coordinate multiple models, feedback mechanisms that refine outputs, and user interfaces that shape how models are accessed and interpreted. Together, these elements form a system in which the model is only one component among many.

This shift has practical implications for how AI systems are designed and evaluated. Developers must consider not only the performance of individual models but also how they interact within larger systems. This includes understanding how different components influence each other and how changes in one part of the system can affect overall behavior.

Programming replaces prompting as core paradigm

The role of prompting in AI is evolving into something closer to programming. While early discussions of generative AI focused heavily on prompt engineering, the research suggests that this concept is giving way to more structured and systematic approaches to controlling model behavior.

Prompting is no longer limited to simple input-output interactions. Instead, it is increasingly integrated into complex workflows that involve multiple steps, conditional logic, and iterative processes. In this context, prompts function more like components of a program than standalone instructions.

The study describes how prompting and programming are converging, creating a new paradigm in which language models are effectively programmed through structured interactions. This includes the use of chained prompts, dynamic input generation, and feedback loops that guide model behavior over time.
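The pattern of chained prompts with conditional logic can be sketched minimally. This is an illustrative stand-in, not code from the study: `call_model` is a hypothetical placeholder for any language-model API, stubbed here so the example runs as-is.

```python
# Illustrative sketch of a "prompt program": prompts chained with
# conditional logic and an iterative revision loop. `call_model` is a
# hypothetical stand-in for a real LLM API call.

def call_model(prompt: str) -> str:
    """Hypothetical model call; a real system would query an LLM API."""
    return f"response to: {prompt}"

def summarize_then_critique(text: str, max_rounds: int = 2) -> str:
    """Chain two prompts and loop until a (stubbed) check passes."""
    draft = call_model(f"Summarize: {text}")
    for _ in range(max_rounds):
        critique = call_model(f"Critique this summary: {draft}")
        if "no issues" in critique:  # conditional branch in the chain
            break
        draft = call_model(f"Revise using critique: {critique}")
    return draft
```

Viewed this way, each prompt is one statement in a larger program, and control flow (the loop and the branch) lives outside the model entirely.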

This convergence reflects broader changes in how AI systems are being used. As organizations deploy models in more complex applications, such as automated decision-making and multi-agent systems, the need for precise control and coordination has increased. Simple prompts are no longer sufficient to manage these systems, leading to the development of more advanced programming techniques.

The research also highlights the role of critical code studies in understanding these developments. By examining the code and structures that underpin AI systems, researchers can gain insights into how models are shaped and how they function within larger environments. This approach emphasizes the importance of analyzing not just the outputs of AI systems but also the processes that generate them.

The shift from prompting to programming has significant implications for both developers and users. It requires new skills and tools for designing and managing AI systems, as well as new frameworks for understanding their behavior. At the same time, it raises questions about transparency and accountability, as the complexity of these systems can make them more difficult to interpret.

Feedback loops and cascading systems redefine AI behavior

The study identifies feedback loops as a key feature of modern AI systems. These loops occur when models are used to generate outputs that are then fed back into the system, influencing future behavior. Over time, this process can lead to the emergence of new patterns and dynamics that are not present in the original model.

Feedback loops are particularly important in systems where multiple models interact with each other. In these environments, outputs from one model can serve as inputs for another, creating chains of interaction that amplify certain behaviors and suppress others. This process can result in the formation of what the study describes as “models of models,” where systems increasingly abstract and build upon their own outputs.
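A minimal sketch of such a loop, under the assumption of two stubbed models rather than real ones: one model's output becomes the other's input, and the result flows back as the new state.

```python
# Illustrative sketch of a feedback loop between two stubbed models:
# the writer's output is the reviewer's input, and the reviewer's
# output becomes the writer's next input. Both functions are
# hypothetical placeholders, not an API described in the study.

def writer(prompt: str) -> str:
    return f"draft({prompt})"

def reviewer(draft: str) -> str:
    return f"notes({draft})"

def feedback_loop(seed: str, rounds: int = 3) -> str:
    state = seed
    for _ in range(rounds):
        draft = writer(state)    # model A consumes the current state
        state = reviewer(draft)  # model B's output becomes the new state
    return state
```

The nesting in the returned string shows how each round builds on abstractions of earlier outputs, a toy version of "models of models".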

The rise of cascading AI systems represents a significant departure from earlier approaches to machine learning. Instead of relying on a single model to perform a specific task, modern systems often involve multiple models working together in coordinated ways. This includes scenarios where different models specialize in different tasks, such as data processing, decision-making, and output generation.
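The cascade described above can be sketched as a pipeline of specialized stages. All three stages are stubbed stand-ins for the roles the article names (data processing, decision-making, output generation), not components from the study.

```python
# Illustrative sketch of a cascading pipeline: specialized stages
# (all hypothetical stubs) chained so that each stage's output is
# the next stage's input.

def preprocess(raw: str) -> str:
    """Data-processing stage: normalize the input."""
    return raw.strip().lower()

def decide(cleaned: str) -> str:
    """Decision-making stage: route by a simple length rule."""
    return "long" if len(cleaned) > 10 else "short"

def generate(label: str) -> str:
    """Output-generation stage: produce the final reply."""
    return f"[{label}] reply"

def pipeline(raw: str) -> str:
    # The "model" a user sees is really this whole chain.
    return generate(decide(preprocess(raw)))
```

A single stage swapped out or retuned changes end-to-end behavior, which is why evaluating any one component in isolation misses the system's dynamics.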

These cascading systems are becoming more common as organizations seek to build more sophisticated AI applications. By combining multiple models, developers can create systems that are more flexible, scalable, and capable of handling complex tasks. However, this approach also introduces new challenges, particularly in terms of managing interactions and ensuring consistent behavior.

Understanding these systems requires a new level of abstraction. Traditional methods of analyzing individual models are not sufficient to capture the dynamics of multi-model systems. Instead, researchers and practitioners must adopt a systems-level perspective that considers how different components interact and how these interactions shape overall behavior.

This shift has important implications for AI governance and regulation. As systems become more complex, it becomes more difficult to identify where responsibility lies and how decisions are made. The study suggests that addressing these challenges will require new frameworks that account for the distributed and emergent nature of AI systems.

Toward a systems-level understanding of artificial intelligence

The study contends that the future of AI lies in understanding systems rather than individual models. As generative AI continues to evolve, the distinction between models and systems is likely to become increasingly blurred. This calls for a new approach to AI research and development that focuses on interactions, processes, and infrastructures.

A systems-level perspective provides a more comprehensive understanding of how AI works in practice. It allows researchers to identify the factors that influence model behavior and to design systems that are more robust and reliable. It also highlights the importance of considering the broader context in which AI operates, including technical, social, and organizational factors.

For developers, this approach offers new opportunities to create more sophisticated and effective AI systems. By leveraging interactions between models and components, it is possible to build systems that are greater than the sum of their parts. At the same time, it requires careful design and management to ensure that these systems function as intended.

For policymakers, the findings reinforce the need to rethink how AI is regulated. Traditional approaches that focus on individual models may not be sufficient in a world where systems play a central role. Instead, regulation must address the complexity and interconnectedness of modern AI infrastructures.

The study also points to the importance of interdisciplinary research in advancing understanding of AI systems. Insights from fields such as philosophy, software studies, and critical theory can help illuminate the underlying dynamics of these systems and provide new perspectives on their implications.

  • FIRST PUBLISHED IN:
  • Devdiscourse