Risks of applying ChatGPT & large language models to the Linux shell

I just read an interesting article that connects the shell to ChatGPT. Just type a request in plain English, and “ShellGPT” translates it into a Linux command with appropriate options & arguments. It’ll even run the command automatically, without review, if you ask.

This article unintentionally reveals risks of this kind of setup. Here’s a request that the author provided:

“Make all files in the current directory read-only”

ShellGPT dutifully translates this request to:

chmod -R a-w .

Holy crap. There are at least two things wrong here, because the request was ambiguous.

  1. Does “all files in the current directory” mean just the current directory or all subdirectories recursively? ShellGPT chooses the second, riskier meaning.
  2. Does “read-only” for files mean no permissions except read (r--), or merely unwritable (so r-x would still be OK for shell scripts)? ShellGPT assumes the second meaning.

So, if the poor user meant chmod 400 * and ShellGPT runs chmod -R a-w ., we’ve not only set the wrong permissions, we’ve changed them on every file and directory in the entire tree.
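
To make the ambiguity concrete, here’s a throwaway demo tree (the names demo, sub, script.sh, and notes.txt are invented for illustration; try each interpretation on a fresh copy of the tree):

mkdir -p demo/sub
printf '#!/bin/sh\necho hi\n' > demo/script.sh
chmod 755 demo/script.sh
touch demo/sub/notes.txt
cd demo

# Interpretation 1: owner-read-only, top level only
chmod 400 *        # script.sh AND the sub directory become r--------;
                   # script.sh loses execute, and sub can no longer be entered

# Interpretation 2: what ShellGPT generated
chmod -R a-w .     # strips write from every file and directory in the tree,
                   # so nothing under . can be created, deleted, or renamed

ls -lR             # compare the resulting modes

Note that even the “safe” reading has a surprise: * matches the subdirectory itself, so neither command does exactly what the English request implied.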

Sure, the user can review the command before it runs, but if you already have the knowledge to do that, you can skip the English and type the command.
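
That said, if you do experiment with a tool like this, a confirmation gate is cheap insurance. Here’s a minimal sketch in POSIX sh, where ask_shellgpt is a hypothetical stand-in for whatever produces the command string:

#!/bin/sh
# confirm-run: show the generated command, run it only on explicit approval.
# ask_shellgpt is hypothetical -- substitute your actual translation step.
cmd=$(ask_shellgpt "$@") || exit 1

printf 'Generated command:\n  %s\n' "$cmd"
printf 'Run it? [y/N] '
read -r answer
case $answer in
  [Yy]*) eval "$cmd" ;;
  *)     echo 'Aborted.' ;;
esac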

Interested to hear what others have to say.


It’s interesting to play with something like ShellGPT, but I’m not sure I’d turn it loose to automatically run whatever it translated my request into. If I used it for a few weeks and it proved very accurate, then I might consider enabling automatic execution. At least that seems prudent to me.

Hi @dbauthor,
I think it is dangerous. A user could use it to get one command, as in your example, apply it and get the wrong result, then try to correct that wrong result by asking a further question and applying the second answer, and so on, each round compounding the last.
The chance of untangling the mess would be low, like debugging an untested shell script with multiple errors.
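
Using the chmod example from the original post, the compounding might look like this (the follow-up prompt is invented):

chmod -R a-w .     # answer to question 1: the whole tree is now unwritable
# user: “Now I can’t save anything, make my files writable again!”
chmod -R a+w .     # answer to question 2: write is granted to user, group,
                   # AND other; files that were 644 are now 666, directories
                   # that were 755 are now 777 -- looser than before the “fix”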

A new user needs to learn by studying examples and then applying them, one command at a time, with lots of checking.

The philosophy behind ChatGPT is wrong. Getting instant, packaged answers to questions undermines the learning process by removing motivation.
