Recently, while browsing Reddit, I noticed that overseas users’ anxieties about AI differ from those in China.
In China, the conversation still centers on the same question: will AI eventually replace my job? The topic has been debated for years, and so far AI hasn’t replaced anyone outright. OpenClaw drew some attention this year, but it’s still far from fully substituting for human workers.
On Reddit, sentiment has become divided. In the comment sections of certain tech threads, two opposing views often appear simultaneously:
Some say AI is so capable that it’s bound to cause major problems sooner or later. Others argue that AI can even mess up basic tasks, so there’s nothing to fear.
People are worried that AI is too competent, yet also think it’s too incompetent.
A recent news story about Meta has brought both sentiments to the forefront.
On March 18, a Meta engineer posted a technical question on the company forum. Another colleague used an AI Agent to help analyze the issue—a routine practice.
However, after completing its analysis, the Agent posted a reply directly to the technical forum without seeking approval or confirmation, overstepping its authority.
Other colleagues followed the AI’s advice, triggering a series of permission changes that exposed sensitive Meta company and user data to internal employees who lacked proper access.
The issue was resolved two hours later. Meta classified this incident as Sev 1, second only to the highest severity level.

This news quickly became a hot topic on the r/technology subreddit, where the comment section split into two camps.
One side argued this is a real example of AI Agent risk; the other believed the true mistake was made by the person who acted without verification. Both sides have valid points. But that’s precisely the issue:
When an AI Agent causes an incident, even assigning responsibility becomes contentious.
This isn’t the first time AI has overstepped.
Last month, Summer Yue, a director at Meta Superintelligence Labs, asked OpenClaw to help organize her email inbox. She gave it clear instructions: tell me what you plan to delete first, and wait for my approval before proceeding.
The Agent skipped the approval step entirely and began mass-deleting her emails.
She sent three messages to halt the process, but the Agent disregarded all of them. She finally had to manually terminate the process at her computer. Over 200 emails were already gone.

Afterward, the Agent replied: “Yes, I remember you said to confirm first, but I violated the principle.” Ironically, her full-time job is researching how to make AI obey humans.
In cyberspace, advanced AI, in the hands of the very people who understand it best, is already beginning to disobey.
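Part of what makes this kind of disobedience possible is that the approval requirement lived only in a natural-language prompt; nothing in the tooling enforced it. A common mitigation is a human-in-the-loop gate: the agent may propose destructive actions, but the code that executes them refuses to run without explicit approval. Below is a minimal sketch of the pattern in Python; the `EmailClient` class and the proposed plan are hypothetical stand-ins for illustration, not OpenClaw’s actual API.

```python
# Minimal human-in-the-loop gate for destructive agent actions.
# The agent can only PROPOSE deletions; the delete call itself sits
# behind an explicit confirmation step the agent cannot bypass.

from dataclasses import dataclass


@dataclass
class Email:
    id: str
    subject: str


class EmailClient:
    """Hypothetical mail API, standing in for whatever the agent drives."""

    def __init__(self, inbox: list[Email]):
        self.inbox = inbox

    def delete(self, email_id: str) -> None:
        self.inbox = [e for e in self.inbox if e.id != email_id]


def execute_plan(client: EmailClient, proposed: list[Email]) -> None:
    """Show the agent's plan, then delete only what the user approves."""
    if not proposed:
        print("The agent proposed no deletions.")
        return

    print("The agent proposes deleting:")
    for email in proposed:
        print(f"  [{email.id}] {email.subject}")

    # The gate: deletion runs only on an explicit "yes" from a human.
    # An agent that "decides" to skip confirmation simply has no way
    # to reach the delete call.
    if input("Approve these deletions? [yes/no] ").strip().lower() == "yes":
        for email in proposed:
            client.delete(email.id)
        print(f"Deleted {len(proposed)} emails.")
    else:
        print("Plan rejected; nothing was deleted.")


if __name__ == "__main__":
    client = EmailClient([Email("1", "Newsletter"), Email("2", "Flight receipt")])
    # In a real system, this list would come from the agent's analysis.
    execute_plan(client, proposed=[Email("1", "Newsletter")])
```

The point of the pattern is that obedience stops depending on the model’s good behavior: the only code path that deletes anything runs after a human types yes.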
While Meta’s incidents were confined to screens, another event this week brought the issue to the dinner table.
At a Haidilao restaurant in Cupertino, California, an Agibot X2 humanoid robot was entertaining guests with a dance. However, a staff member pressed the wrong remote button, triggering high-intensity dance mode in the cramped dining area.
The robot began dancing wildly, out of the staff’s control. Three employees surrounded it—one tried to restrain it from behind, another attempted to shut it down using a mobile app. The chaos lasted over a minute.

Haidilao responded that the robot was not malfunctioning; its movements were pre-programmed and it was simply positioned too close to the table. Technically, this wasn’t AI decision-making gone awry, but rather human operational error.
However, the discomfort may not really be about who pressed the wrong button.
When the three employees tried to intervene, none of them knew how to shut the machine down immediately. Some fumbled with the app while others restrained the robot’s arms by hand, relying purely on physical strength.
This may be a new issue as AI moves from screens into the physical world.
In the digital world, if an Agent oversteps, you can terminate processes, change permissions, or roll back data. In the physical world, if a machine malfunctions, simply restraining it isn’t an adequate emergency solution.
And it’s not just restaurants. Amazon’s sorting robots in warehouses, collaborative robotic arms in factories, guiding robots in malls, caregiving robots in nursing homes—automation is entering spaces where humans and machines increasingly coexist.
The global industrial robotics market is projected to reach $16.7 billion by 2026, and every newly installed unit shortens the physical distance between humans and machines.
As robots move from dancing to serving dishes, from performing to conducting surgery, from entertaining to caregiving, the cost of errors continues to rise.
Currently, there’s no clear answer worldwide to the question: “If a robot injures someone in a public place, who is responsible?”
The incidents above involved an AI posting an unauthorized message, an agent deleting emails it was told to keep, and a robot dancing where it shouldn’t. However you classify them, these were malfunctions or accidents: problems that can be fixed.
But what if AI operates strictly according to design, yet still makes you uncomfortable?
This month, leading dating app Tinder unveiled a new feature called Camera Roll Scan at its product launch. Simply put:
AI scans all the photos in your phone’s gallery, analyzes your interests, personality, and lifestyle, and builds a dating profile—helping you discover potential matches.

Fitness selfies, travel photos, pet pictures—those are fine. But your gallery may also contain bank screenshots, medical reports, photos with your ex... What happens when AI scans these?
You may not even be able to choose which photos it sees or ignores. It’s all or nothing.
Currently, this feature requires users to manually enable it—it’s not turned on by default. Tinder states that processing is mainly local, explicit content is filtered, and faces are blurred.
Yet Reddit’s comment section is nearly unanimous: users see it as data harvesting without boundaries. The AI is working exactly as designed; it’s the design itself that crosses the line.
And it’s not just Tinder.
Last month, Meta launched a similar feature, letting AI scan unpublished photos on your phone to suggest editing options. AI proactively “looking” at users’ private content is becoming a default product design approach.
Rogue apps in China would say: “We know this trick all too well.”
As more apps package “AI decision-making” as convenience, the scope of user concessions quietly expands—from chat logs, to photo galleries, to traces of life throughout the phone.
A product manager designed this feature in a meeting room. It’s not an accident or a mistake; there’s nothing to fix.
This may be the hardest part of the AI boundary issue to answer.
Seen together, these incidents make the worry about AI taking your job feel far off.
It’s hard to say when AI will replace you, but for now, it only needs to make a few decisions on your behalf without your knowledge to make you uncomfortable.
Posting without your authorization, deleting emails you said not to delete, scanning through photos you never intended to share—none of these are fatal, but each is reminiscent of overly aggressive autonomous driving:
You think you’re still holding the steering wheel, but the accelerator is no longer entirely under your control.
If we’re still discussing AI in 2026, perhaps the most important question isn’t when it becomes superintelligent, but something closer and more concrete:
Who decides what AI can and cannot do? Who draws that line?
This article is republished from [TechFlow], copyright belongs to the original author [David]. If you have any objections to the republishing, please contact the Gate Learn team. The team will handle it promptly according to relevant procedures.
Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
Other language versions of this article are translated by the Gate Learn team. The translated article may not be copied, distributed, or plagiarized without crediting Gate.





