(Source: "China Economic Times" 2026-03-03)
Editor’s note: As autonomous AI agents and their scenario-based applications accelerate into deployment, human-machine collaboration and multi-agent collaboration are profoundly reshaping how people produce and live. With capabilities for autonomous decision-making, continuous learning, and cross-domain execution, intelligent agents not only greatly improve operational efficiency but also raise new challenges, such as the allocation of rights and responsibilities, ethical norms, and data security. The focus of AI governance is now shifting from content control to behavioral regulation, and the key is to balance innovative vitality against the safety bottom line. To that end, this newspaper interviewed a number of industry experts and scholars to explore effective paths for the standardized development of AI agents.
——Interview with the Director of the Digital Economy and Legal Innovation Research Center of the w88 casino
■China Economic Times reporter Wang Caina
AI agents are entering people's work and lives at unprecedented speed: automatically organizing information, handling complex operations, and even making decisions on people's behalf. But as intelligent agents move from tools to partners, their autonomy brings more than convenience. How is personal information collected? Who is responsible for an agent's behavior? Is the AI agent a "private steward" or a "potential master"? On these questions, a China Economic Times reporter recently interviewed Xu Xu, director of the Digital Economy and Legal Innovation Research Center of the w88 casino.
Personal Information Protection: Core Obligations of Agent Providers
China Economic Times: In the service model of AI agents, users often exchange personal information for related services. Compared with conventional scenarios, what are the particularities of AI agents in processing personal information? How should providers respond?
Xu Xu: In the digital age, exchanging personal information for agent services has become the norm, which makes personal information protection a core obligation of agent providers. Compared with conventional processors, the challenges facing agent providers fall into four areas. First, providers occupy multiple legal roles: a provider may form a joint processor with a terminal manufacturer and bear joint and several liability, or may act as an entrusted party that must perform security-safeguard and review-and-reminder obligations. Second, the methods of information collection are more varied: besides direct collection, agents can collect information indirectly through API calls, screen captures, and similar tools, and can even obtain environmental data in real time. Third, the types of information collected are complex: they include sensitive information such as account numbers, payment details, and health data, as well as fine-grained telemetry, and an agent's autonomy lets it expand the scope of processing on its own. Finally, collection easily "spills over" to third parties: in the course of processing, an agent will inevitably touch other people's networked and group personal information.
In response, providers need to act on several fronts: adopt protection by design, prioritize on-device processing, and restrict operations in sensitive links; improve informed-consent design with a tiered, dynamically revocable consent structure; hold to the entrusted-party position when processing third-party information; and implement the data-minimization principle at both the technical and institutional levels, establishing a comprehensive judgment paradigm based on the application scenario.
Behavioral Boundaries and Responsibility: AI Agent ≠ User
China Economic Times: Some argue that AI agents are merely upgraded electronic agents, so their behavior can simply be treated as an extension of user behavior. Do you agree? Given that intelligent agents do not have legal personhood and cannot simply be equated with users, how can the legal rights and responsibilities of their actions be clarified through an agency relationship?
Xu Xu: There are essential differences between AI agents and electronic agents. In the nature of their behavior, electronic agents only execute preset rules and have no decision-making latitude, whereas AI agents are autonomous and may produce different results from the same instruction. In outward appearance, agent behavior also differs completely from user behavior. User operations are carried out under real names, at low frequency, and with clear purposes; AI agents often access services around the clock, at high frequency, and in an anonymous state, which not only undermines the network real-name system but also defeats traditional frequency-based security monitoring. The differences are especially significant in data acquisition and circulation. Users access data item by item and store it locally, so the risk is relatively controllable; AI agents, once authorized, capture data in batches, potentially beyond the scope of the user's own information. More importantly, this data is often transmitted to the cloud for aggregation, which can expand security risks from the individual to the collective.
Introducing the agency relationship is the core path to clarifying rights and responsibilities. For agency to be legally effective, four core norms must be strictly observed. First, there must be a clear expression of intent to grant agency power: the authorization should be conveyed to third parties in machine-readable form, stating the agency matters, the scope of authority, the term, and an electronic signature. Second, the agency power must not exceed the user's own authority: user authorization is constrained by both contract terms and technical standards, and system-level permissions the user has no right to exercise cannot be granted to an intelligent agent. Third, the matters must be capable of agency: acts the law requires to be performed personally, such as marriage and wills, as well as factual acts and illegal acts, cannot rest on "user authorization" as their legal basis. Fourth, the principle of disclosed agency must be followed: agency activities should clearly inform the third party of the agent's identity, and machine identity must not be concealed to evade platform rules.
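The first and fourth norms, a machine-readable authorization stating matters, scope, term, and electronic signature, with the agent's identity disclosed, can be sketched as follows. This is a hypothetical illustration using an HMAC signature over a JSON payload; the field names and helper functions are my own assumptions, not a standard schema.

```python
import hashlib
import hmac
import json
import time

def issue_agency_grant(user_id: str, agent_id: str, matters: list[str],
                       scope: list[str], term_days: int,
                       signing_key: bytes) -> dict:
    """Build a machine-readable authorization stating agency matters,
    authority (scope), term, and an electronic signature."""
    now = int(time.time())
    grant = {
        "principal": user_id,      # the user granting agency power
        "agent": agent_id,         # disclosed agent identity (disclosed agency)
        "matters": matters,        # what the agent is authorized to do
        "scope": scope,            # must not exceed the user's own authority
        "issued_at": now,
        "expires_at": now + term_days * 86400,   # the term
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["signature"] = hmac.new(signing_key, payload,
                                  hashlib.sha256).hexdigest()
    return grant

def verify_agency_grant(grant: dict, signing_key: bytes) -> bool:
    """A third party checks the signature and term before honoring the agent.
    Any tampering with matters or scope invalidates the signature."""
    body = {k: v for k, v in grant.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, grant.get("signature", ""))
            and time.time() < grant["expires_at"])
```

In practice a public-key signature would replace the shared-secret HMAC so third parties can verify without holding the user's key; the structure of the grant is the same.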
Human-Machine Coexistence: Hold the Safety Bottom Line and Safeguard Human Primacy
China Economic Times: As the autonomy of AI agents continues to increase, what is the core direction of their technological evolution? How do you assess the future of human-machine coexistence?
Xu Xu: Looking ahead, AI agents will develop along two main lines: from individual intelligence to swarm intelligence, and from the virtual world into the physical world. These two leaps will bring a qualitative improvement in their ability to solve complex problems. Swarm intelligence means that multiple specialized agents form a collaborative ecosystem and jointly pursue complex goals through dynamic task decomposition, persistent memory, and coordinated autonomy. Rather than relying on a single all-purpose agent, a large goal is divided into subtasks assigned to different specialist agents, which are then coordinated through communication protocols, shared memory, and similar mechanisms. Moving into the physical world means the agent evolves from a large-language-model base toward multimodal interaction that integrates vision, language, and environmental data; its core capabilities, such as generalization and autonomy, must be learned and optimized through interaction with a dynamic physical environment. The key is to build a world model that encodes physical laws and causal relationships, in effect a built-in simulator of the external environment that can predict the outcomes of different behaviors.
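The coordination pattern described above, decomposing a goal into subtasks, routing each to a specialist agent, and coordinating through shared memory, can be sketched in a few lines. The agent names, skill tags, and `orchestrate` routine are illustrative placeholders; real agents would call models and tools rather than return strings.

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    """Persistent memory that agents read and write to coordinate."""
    results: dict[str, str] = field(default_factory=dict)

class SpecialistAgent:
    """A narrow agent that only handles tasks matching its skill."""

    def __init__(self, name: str, skill: str) -> None:
        self.name = name
        self.skill = skill

    def handles(self, task: dict) -> bool:
        return task["skill"] == self.skill

    def run(self, task: dict, memory: SharedMemory) -> None:
        # Placeholder "work": a real agent would invoke a model or tool here.
        memory.results[task["id"]] = f"{self.name} completed {task['id']}"

def orchestrate(subtasks: list[dict], agents: list[SpecialistAgent],
                memory: SharedMemory) -> dict[str, str]:
    """Dynamic task decomposition: route each subtask to a capable
    specialist, accumulating results in shared memory."""
    for task in subtasks:
        agent = next(a for a in agents if a.handles(task))
        agent.run(task, memory)
    return memory.results
```

Real multi-agent frameworks add negotiation, retries, and inter-agent messaging on top of this routing loop, but the decompose-route-share skeleton is the same.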
If AI agents reach the L4 level of autonomy and combine swarm intelligence with the ability to act in the physical world, I am relatively pessimistic about human-machine coexistence. Ideally, an AI agent is a person's private butler, but in reality humans may gradually cede control, because people naturally pursue convenience and avoid effortful thinking. When an AI agent can handle all matters, people will readily hand decision-making over to it, "hiring a servant only to end up serving a master." This is why we must attend now to three levels of safety governance. Beyond the current focus on the agent's own instrumental safety, we must attend in advance to two further levels: keeping agents autonomously controllable and preventing them from being used for evil, so that AI agents do not slip out of control. At the same time, human autonomy must be preserved: the decisions of intelligent agents must not completely replace human judgment. This requires advance planning across technology, law, and industry regulation, so that AI agents always develop under human control and hold the safety bottom line even as they innovate.