I just read an interesting article that connects the shell to ChatGPT. Just type a request in plain English, and “ShellGPT” translates it into a Linux command with appropriate options & arguments. It’ll even run the command automatically, without review, if you ask.
This article unintentionally reveals the risks of this kind of setup. Here’s a request that the author provided:
“Make all files in the current directory read-only”
ShellGPT dutifully translates this request to:
chmod -R a-w .
Holy crap. There are several things wrong here because the request was ambiguous.
- Does “all files in the current directory” mean just the current directory or all subdirectories recursively? ShellGPT chooses the second, riskier meaning.
- Does “read-only” for files mean no permissions except read permission (r--), or does it mean merely unwritable (so r-x would be OK for shell scripts)? ShellGPT assumes the second meaning.
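The two readings of “read-only” produce genuinely different modes. Here’s a quick sketch you can run in a scratch directory (the file name is made up) showing that chmod 400 strips execute permission while a-w leaves it intact:

```shell
# An executable shell script: rwxr-xr-x
touch script.sh
chmod 755 script.sh

# Reading 1: read permission and nothing else -- the script
# can no longer be executed.
chmod 400 script.sh
stat -c %a script.sh     # prints 400 (r--------)

chmod 755 script.sh      # reset for the second reading

# Reading 2 (the one ShellGPT picked): remove only the write
# bits -- execute permission survives.
chmod a-w script.sh
stat -c %a script.sh     # prints 555 (r-xr-xr-x)
```

(stat -c %a is the GNU coreutils form; on macOS/BSD it’s stat -f %Lp.)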
So, if the poor user meant chmod 400 * and ShellGPT runs chmod -R a-w . instead, we’ve not only set the permissions wrong, we’ve also altered the permissions of every file and directory in the entire tree.
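To see how far the recursive version reaches, here’s a small sketch in a throwaway tree (directory and file names are made up). Note that -R hits the directories themselves, not just the files, so you also lose the ability to create new files anywhere in the tree:

```shell
# Build a two-level scratch tree with known modes.
mkdir -p demo/sub
touch demo/top.txt demo/sub/nested.txt
chmod 755 demo demo/sub
chmod 644 demo/top.txt demo/sub/nested.txt

# The command ShellGPT generated, run against the tree.
chmod -R a-w demo

stat -c %a demo/sub/nested.txt   # prints 444: a file one level down changed
stat -c %a demo/sub              # prints 555: the directory is unwritable too,
                                 # so nothing new can be created inside it
```

Undoing this across a real tree means reconstructing the original mode of every file and directory, which chmod -R can’t do for you.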
Sure, the user can review the command before it runs, but if you already have the knowledge to do that, you can skip the English and type the command.
Interested to hear what others have to say.