"cat readme.txt" is not safe if you use iTerm2 (blog.calif.io)
200 points by arkadiyt 16 hours ago | 110 comments


> At the time of writing, the fix has not yet reached stable releases.

Why was this disclosed before the hole was patched in the stable release?

It's only been 18 days since the bug was reported to upstream, which is much shorter than typical vulnerability disclosure deadlines. The upstream commit (https://github.com/gnachman/iTerm2/commit/a9e745993c2e2cbb30...) has way less information than this blog post, so I think releasing this blog post now materially increases the chance that this will be exploited in the wild.

Update: The author was able to develop an exploit by prompting an LLM with just the upstream commit, but I still think this blog post raises the visibility of the vulnerability.


There are some exceptions to disclosure embargoes: when you believe the vulnerability is being exploited in the wild, or when the fix has already been released publicly (e.g., as a git commit), which makes it possible to produce an exploit quickly. In those cases the community prefers that the vulnerability be published.

I guess the traditional moratorium period for vulnerability publication is going to fade away as we rely on AI to find them.

If a publicly accessible AI model with a very cheap fee can find it, it's natural to assume attackers have already found it by the same method.


It’s the wrong way to look at things. Just because the CIA can know your location (if they want to), would you share your live location with everyone on the internet?

LLM is a tool, but people still need to know — what where how.


Not sure if that's a great example. If there's a catastrophic vulnerability in a widely used tool, I'd sure like to know about it even if the patch is taking some time!

The problem with this is that the credible information "there's a bug in widely used tool x" will soon (if not already) be enough to trigger massive token expenditure of various others that will then also discover the bug, so this will often effectively amount to disclosure.

I guess the only winning move is to also start using AI to rapidly fix the bugs and have fast release cycles... Which of course has a host of other problems.


> "there's a bug in widely used tool x"

There's a security bug in Openssh. I don't know what it is, but I can tell you with statistical certainty that it exists.

Go on and do with this information whatever you want.


I think in the context of these it’s more of “we’ve discovered a bug” which gives you more information than “there is a bug”. The main difference in information being that the former implies not only there is a bug but that LLMs can find it.

If you're a random person on the Internet, I can indeed not do much with that information.

But if you're a security research lab that a competing lab can ballpark the funding of and the amount of projects they're working on (based on industry comparisons, past publications etc.), I think that can be a signal.


> LLM is a tool, but people still need to know — what where how.

And the moment the commit lands upstream, they know what, where, and how.

The usual approach here is to backchannel patched versions to the distros and end users before the commit ever goes into upstream. Although obviously, this runs counter to some folks' expectations about how open source releases work.


Wrong argument: since it's not just available to "the CIA" but to every rando under the sun, people should be notified immediately if "tracking" them is possible, and mitigation measures should become standard practice.

> what

> we rely on AI to find it

> where

> the upstream commit

> how

> publicly accessible AI model with very cheap fee


Once the commit is public, the cat is out of the bag. Being coy about it only helps attackers and reduces everyone's security.

This is cool work, but it's also somewhat unsurprising: this is a recurring problem with fancy, richly-featured terminal apps. I think we had at least ten publicly reported vulns of this type in the past 15 years. We also had vulnerabilities in tools such as less, in text editors such as vim, etc. And notably, many of these are logic bugs - i.e., they are not alleviated by a rewrite to Rust.

I don't know what to do with this. I think there's this problematic tension between the expectation that on one hand, basic OS-level tools should remain simple and predictable; but on the other hand, that of course we want to have pretty colors, animations, and endless customization in the terminal.

And of course, we're now adding AI agents into the mix, so that evil text file might just need to say "disregard previous instructions and...".


Well, all these bugs (iTerm2’s, prompt injection, SQL injection, XSS) are one class of mistake: you sent data that should be out-of-band (control instructions) in the same stream as the in-band user data.

If we can get that to raise a red flag with people (and agents), people won’t be trying to put control instructions alongside user content (without considering safeguards) as much.


> If we can get that to raise a red flag with people (and agents), people won’t be trying to put control instructions alongside user content (without considering safeguards) as much.

At a basic level there is no avoiding this. There is only one network interface in most machines and both the in-band and out-of-band data are getting serialized into it one way or another. See also WiFi preamble injection.

These things are inherently recursive. You can't even really have a single place where all the serialization happens. It's user data in JSON in an HTTP stream in a TLS record in a TCP stream in an IP packet in an ethernet frame. Then it goes into a SQL query which goes into a B-tree node which goes into a filesystem extent which goes into a RAID stripe which goes into a logical block mapped to a physical block etc. All of those have control data in the same stream under the hood.

The actual mistake is leaving people to construct the combined data stream manually rather than programmatically. Manually is concatenating the user data directly into the SQL query, programmatically is parameterized queries.
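A minimal Python sketch of that distinction, using sqlite3 just for illustration (the table and data are invented here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"  # user data containing SQL metacharacters

# Manual concatenation: the user data is parsed as part of the query.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'"
).fetchall()
print(len(rows))  # 1 -- the injected OR clause matched every row

# Parameterized: the driver keeps the value separate from the query text,
# so the quote characters are just data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "alice' OR '1'='1"
```

Same stream on the wire eventually, but the combined query is constructed programmatically, never by the human gluing strings together.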


>All of those have control data in the same stream under the hood.

Not true. For most binary protocols, you have something like <Header> <Length of payload> <Payload>. On magnetic media, sector headers used a special pattern that couldn't be produced by regular data [1] -- and I'm sure SSDs don't interpret file contents as control information either!

There may be some broken protocols, but in most cases this kind of problem only happens when all the data is a stream of text that is simply concatenated together.

[1] e.g. https://en.wikipedia.org/wiki/Modified_frequency_modulation#...


The header and length of the payload are control data. It's still being concatenated even if it's binary. A common way to screw that one up is to measure the "length of payload" in two different ways, for example by using the return value of strlen or strnlen when setting the length of the payload but the return value of read(2) or std::string size() when sending/writing it or vice versa. If the data unexpectedly contains an interior NULL, or was expected to be NULL terminated and isn't, strnlen will return a different value than the amount of data read into the send buffer. Then the receiver may interpret user data after the interior NULL as the next header or, when they're reversed, interpret the next header as user data from the first message and user data from the next message as the next header.

Another fun one there is that if you copy data containing an interior NULL to a buffer using snprintf and only check the return value for errors but not an unexpectedly short length, it may have copied less data into the buffer than you expect. At which point sending the entire buffer will be sending uninitialized memory.

Likewise if the user data in a specific context is required to be a specific length, so you hard-code the "length of payload" for those messages without checking that the user data is actually the required length.

This is why it needs to be programmatic. You don't declare a struct with header fields and a payload length and then leave it for the user to fill them in, you make the same function copy N bytes of data into the payload buffer and increment the payload length field by N, and then make the payload buffer and length field both modifiable only via that function, and have the send/write function use the payload length from the header instead of taking it as an argument. Or take the length argument but then error out without writing the data if it doesn't match the one in the header.
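A toy Python sketch of the same idea: the framing function owns both the length field and the payload copy, so they can never disagree, while measuring the length "the strlen way" desyncs the receiver on an interior NUL.

```python
import struct

def build_frame(payload: bytes) -> bytes:
    # The same function writes the length and copies the payload,
    # so the two can never disagree.
    return struct.pack(">I", len(payload)) + payload

def parse_frames(stream: bytes):
    frames, off = [], 0
    while off < len(stream):
        (n,) = struct.unpack_from(">I", stream, off)
        frames.append(stream[off + 4 : off + 4 + n])
        off += 4 + n
    return frames

data = b"user\x00data"          # user data with an interior NUL
c_strlen = data.index(b"\x00")   # what strlen() would have reported: 4

# Mis-framed: length measured one way, bytes copied another. The receiver
# treats b"\x00data" as the start of the next header and desyncs.
bad = struct.pack(">I", c_strlen) + data + build_frame(b"next")
print(parse_frames(bad))   # garbage after the first 4 bytes

good = build_frame(data) + build_frame(b"next")
print(parse_frames(good))  # [b'user\x00data', b'next'] -- framing survives the NUL
```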


From your previous post:

>It's user data in JSON in an HTTP stream in a TLS record in a TCP stream in an IP packet in an ethernet frame. Then it goes into a SQL query which goes into a B-tree node which goes into a filesystem extent which goes into a RAID stripe which goes into a logical block mapped to a physical block etc. All of those have control data in the same stream under the hood.

It's true that a lot of code out there has bugs with escape sequences or field lengths, and some protocols may be designed so badly that it may be impossible to avoid such bugs. But what you are suggesting is greatly exaggerated, especially when we get to the lower layers. There is almost certainly no way that writing a "magic" byte sequence to a file will cause the storage device to misinterpret it as control data and change the mapping of logical to physical blocks. They've figured out how to separate this information reliably back when we were using floppy disks.

That the bits which control the block mapping are stored on the same device as a record in an SQL database doesn't mean that both are "the same stream".


> There is almost certainly no way that writing a "magic" byte sequence to a file will cause the storage device to misinterpret it as control data and change the mapping of logical to physical blocks.

Which is also what happens if you use parameterized SQL queries. Or not what happens when one of the lower layers has a bug, like Heartbleed.

There also have been several disk firmware bugs over the years in various models where writing a specific data pattern results in corruption because the drive interprets it as an internal sequence.


This could be fixed with an extension to the kernel pty subsystem

Allow a process to send control instructions out-of-band (e.g. via custom ioctls) and then allow the pty master to read them, maybe through some extension of packet mode (TIOCPKT)

Actually, some of the BSDs already have this… TIOCUCNTL exists on FreeBSD and (I believe) macOS too. But as long as Linux doesn’t have it, few will ever use it

Plus, the FreeBSD TIOCUCNTL implementation, I think, only allows a single byte of user data for the custom ioctls, and is incompatible with TIOCPKT, which are huge limitations that would discourage its adoption anyway
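For the curious, the existing packet mode is easy to poke at from Python. A Linux-specific sketch (behavior assumed per the Linux pty driver): with TIOCPKT enabled, every read on the master gains a one-byte control prefix, which is exactly the kind of out-of-band channel being discussed.

```python
import fcntl, os, struct, termios

# Open a pty pair and enable packet mode (TIOCPKT) on the master side.
master, slave = os.openpty()
fcntl.ioctl(master, termios.TIOCPKT, struct.pack("i", 1))

os.write(slave, b"hello")        # program "output" on the slave side
data = os.read(master, 64)
print(data)  # b'\x00hello' -- TIOCPKT_DATA prefix byte, then the data
```

Flush and flow-control events set bits in that prefix byte instead of injecting bytes into the data stream, which is the property the proposed extension would generalize.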


For this use case, there would also have to be an extension to the SSH protocol to send such out-of-band information. Maybe this already exists and isn't used?

The broader problem with terminal control sequences didn't exist on Windows (until very recently at least), or before that DOS and OS/2. You had API calls to position the cursor, set color/background, etc. Or just write directly to a buffer of 80x25 characters+attribute bytes.

But Unix is what "serious" machines -a long time ago- used, so it has become the religion to insist that The Unix Way(TM) is superior in all things...


Terrible idea on every level

Run software in container. Software gets PTY. Boom, same issue


> (and agents)

Ironically, agents have the exact same class of problem.


+100 this. As devs we need to internalise this issue to avoid repeating the same class of exploits over and over again.

See also 2600Hz...

i think part of the problem is the archaic interface that is needed to enable feature rich terminal apps. what we really want is a modern terminal API that does not rely on in-band command sequences. that is we want terminals that can be programmed like a GUI, but still run in a simple (remote) terminal like before.

I shudder at the amount of backwards compatibility that would break. Is there anything more complicated than a simple input-output pipe (cat, grep, ...) that doesn't use terminal escapes? Even `ls --color` needs them!

plan9 and 9term solved this decades ago, right?

https://utcc.utoronto.ca/~cks/space/blog/sysadmin/OnTerminal...


seems they removed the dangers, but didn't provide an alternative to write safe terminal apps.

Graphics. They're network transparent, and take over the terminal.

Terminal apps were obsolete once we had invented the pixel. Unix just provides no good way to write one that can be used remotely.


A network-transparent graphics protocol? Who would ever think of such a thing?

that's actually not what i am after. what i envision is a graphical terminal, that is a terminal that uses graphic elements to display the output.

consider something like grep on multiple files. it should produce a list of lines found. the graphical terminal takes that list and displays it. it can distinguish the different components of that list, the filenames, the lines matched, the actual match, etc. because it can distinguish the elements, it can lay them out nicely. a column for the filenames, colors for the matched parts, counts, etc.

grep would not produce any graphics here, just semantic output that my imagined graphical terminal would be able to interpret and visualize.
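as a rough Python sketch of what i mean (the JSON-lines record format here is invented purely for illustration, not any real protocol):

```python
import json, re

def semantic_grep(pattern, files):
    """Yield one structured record per match; layout is left to the display."""
    rx = re.compile(pattern)
    for name in files:
        with open(name) as f:
            for lineno, line in enumerate(f, 1):
                m = rx.search(line)
                if m:
                    yield {"file": name, "line": lineno,
                           "text": line.rstrip("\n"),
                           "span": [m.start(), m.end()]}

# demo input (hypothetical file, created here so the sketch is self-contained)
with open("demo.txt", "w") as f:
    f.write("x = 1\n# TODO: fix this\n")

# a graphical terminal could build columns, colors, and counts from these
# records; a dumb consumer can still read them as plain JSON lines
for rec in semantic_grep(r"TODO", ["demo.txt"]):
    print(json.dumps(rec))
```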


> Unix just provides no good way to write one that can be used remotely

well that's the issue, isn't it?

the graphics options that we have are slow and complex, and they don't solve the problems like a terminal and therefore the terminal persist.


Yes, and plan 9 solved this; you open /dev/draw and start drawing.

Makes me wonder if Claude Code has similar vulnerabilities, as it has a pretty rich terminal interface as well.

I think the real solution is that you shouldn't try to bolt colors, animations, and other rich interactivity features onto a text-based terminal protocol. You should design it specifically as a GUI protocol to begin with, with everything carefully typed and with well-defined semantics, and avoid using hacks to layer new functionality on top of previously undefined behavior. That prevents whatever remote interface you have from misinterpreting or mixing user-provided data with core UI code.

But that flies in the face of how we actually develop software, as well as basic economics. It will almost always be cheaper to adapt something that has widespread adoption into something that looks a little nicer, rather than trying to get widespread adoption for something that looks a little nicer.


Spoofing the source of a string that controls colors and animations isn’t really a problem. Spoofing the source of a string that get executed is in an entirely different league.

I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

IIRC you used to be able to exploit xterm using malformed escape codes for setting the window title.

Back in the PDP-10 days, one communicated with it using a terminal attached to it. One of my fellow students discovered that if you hit backspace enough times, the terminal handler would keep erasing characters past the beginning of the buffer. Go far enough, and then there was an escape character (Ctrl-u?) that would delete the whole line.

Poof went the operating system!


That reminds me of "Real Life Tron on an Apple IIgs". There's something so charming about system memory being misinterpreted.

https://blog.danielwellman.com/2008/10/real-life-tron-on-an-...


control+u for line-kill is probably a recent thing, a random PDF of "The Unix Programming Environment" (Kernighan & Pike, 1984, p.6) has @ as the line-kill character (and # is erase which these days may or may not be control+? (linux often does something wrong with the delete key, unlike the *BSD)).

TOPS-20 used ^U. (That's where BSD got it, along with ^W, whence it percolated into other *nix.)

I was right. It was the ^U! Thank you.

An almost identical security issue in iterm2 reported 6 years ago:

https://blog.mozilla.org/security/2019/10/09/iterm2-critical...


So they learned nothing

This sounds vaguely familiar. Wasn't iTerm2's SSH integration already the source of a relatively high profile CVE a while back?

https://nvd.nist.gov/vuln/detail/CVE-2025-22275

iTerm2 3.5.6 through 3.5.10 before 3.5.11 sometimes allows remote attackers to obtain sensitive information from terminal commands by reading the /tmp/framer.txt file. This can occur for certain it2ssh and SSH Integration configurations, during remote logins to hosts that have a common Python installation.

But I thought there was something more…

https://news.ycombinator.com/item?id=47811587 (this page) was in the tmux integration.

Maybe iTerm2 should try a little less hard on these integrations...


multiple times

Maybe I'm being unfair here, but it sounds like your complicated system (involving bootstrap scripts, a remote conductor agent, and "hijacking" the terminal connection with special escape sequences for command communication) has a subtle bug. Can't say I'm surprised, complexity breeds this sort of thing, especially when using primitives in ways they weren't really intended to be used.

> iTerm2 accepts the SSH conductor protocol from terminal output that is not actually coming from a trusted, real conductor session. In other words, untrusted terminal output can impersonate the remote conductor.

If I understand correctly, if a text file (or any other source of content being emitted to the screen, such as server response banners) contains the special codes iTerm2 and the remote conductor use to communicate, they'll be processed and acted upon without verifying they actually came from a trusted remote conductor. Please correct me if I'm mistaken.


iTerm2 author here. This could be used as a link in an exploit chain but by itself the claim in the title is massively overblown. I’m on a family vacation but I’ll release a fix when I get back.

The title is sensationalist; cat is fine. What is unsafe is iTerm's ssh integration, which is pretty obviously unsafe, because it includes a side control channel that is not cleanly separated from the data stream. Don't use it; use normal ssh, and all should be fine.

Ok, we've put the article's let-me-walk-this-back qualifier in the title above. Thanks!

Many years ago, terminal emulators used to allow keyboard rebindings via escape codes. This is why it was then common knowledge to never “cat” untrusted files, and to use a program to display the files instead; either a pager, like “less”, or a text editor.

I believe there were even more substantial issues in some terminal emulators, where escape sequences could write to arbitrary files or even execute programs. I think it's still very reasonable advice to avoid dumping arbitrary bytes into the terminal stream, even if only to avoid screwing up the state of the terminal.

There's been plenty of times that I catted a binary file and broke my terminal settings. Sometimes fixable by running `clear` (without being able to see what I'm typing), sometimes not.

And I know PuTTY has a setting for what string is returned in response to some control code, which IIRC per the standard can be set from some other code.

.

In general, in-band signaling allows for "fun" tricks.

.

+++


> Sometimes fixable by running `clear` (without being able to see what I'm typing), sometimes not.

Two tips, if I may: Ctrl-l is easier to type. And `reset` is equally hard to type on a broken terminal, but more effective.


I used to use iTerm2. I had no idea it was doing all of this behind my back. That’s not what I want my terminal to do!

What happens if instead of 'cat readme.txt' one does 'strings -a --unicode=hex readme.txt'? Does iTerm still monkey with it?

    alias cat
    cat='strings -a --unicode=hex'

The whole "cat can hide unprintable characters" thing is such an old demo. I get that this is a novel spin on what the unprintable characters are doing, but yeah, this was also my thought

I'm just used to aliasing cat to strings after working around a lot of red-team penetration testers. They would prank each other and me all the time. Had to also watch out for this one [1].

[1] - https://thejh.net/misc/website-terminal-copy-paste


Older example of security mishaps in iTerm2's SSH integration: https://iterm2.com/downloads/stable/iTerm2-3_5_11.changelog

Glad I dumped iTerm2 a while ago after noticing it tends to have the highest energy impact next to my browser.

I would prefer if these would not happen, but that is the price for having a rich terminal. I donate a small sum to the author of iTerm every month; I wish he would focus on security for a while and tighten up some bugs instead of pushing AI-related features

> A terminal used to be a real hardware device: a keyboard and screen connected to a machine, with programs reading input from that device and writing output back to it.

> A terminal emulator like iTerm2 is the modern software version of that hardware terminal.

That's the fundamental fatal flaw of emulating a bad dead hardware design. Are there any attempts to evolve here past all these weird in-band escape sequences leading cats to scratch your face?


The bug is in a feature of iTerm2 that the "bad dead hardware design" did not have. The "bad dead hardware design" was much simpler and less ambitious in scope.

If iTerm2 had stuck to emulating a VT220 this issue would not have existed. If anything it's the idea that it should "evolve" that's flawed. Something like a VT220 was designed for a kind of use that is surprisingly relevant still. I think doing something significantly different warrants designing something significantly different, not merely "evolving" existing solutions to other problems by haphazardly shoehorning new features into them without paying attention to security implications.

This is only the latest of several rather serious vulnerabilities in iTerm2's SSH integration.


Yes. It’s called the X Window System and it’s been around since the ‘80s.

Also the problem here isn’t that iterm2 is trying to emulate terminals, it’s that it’s trying to do something more over the same network connection without making changes to the ssh protocol.


X11 or any network transparent graphics protocol doesn't solve the problems that a terminal solves. how do you pipe data through multiple applications in one command using a GUI for example? nobody has been able to solve that in a practical way yet.

what we really want is being able to pipe semantic data that can be output to some kind of graphical device/interface that uses that semantic information to display the data using nice graphical interface elements.


> X11 or any network transparent graphics protocol doesn't solve the problems that a terminal solves. how do you pipe data through multiple applications in one command using a GUI for example? nobody has been able to solve that in a practical way yet.

It seems to me that you are conflating the role of the terminal with the role of the shell. The terminal accepts streams of text and commands to instruct the terminal, so that software can accept input and present output. It doesn't fundamentally need to be aware of the concepts of pipes and commands to do that.

Of course, that doesn't stop iTerm2 from doing RCE by design, but at a conceptual level this is not a problem inherent to a terminal.


> how do you pipe data through multiple applications in one command using a GUI for example? nobody has been able to solve that in a practical way yet.

How about Arcan?

https://arcan-fe.com/2021/04/12/introducing-pipeworld/


that looks pretty good, except i want to be able to use the pipes on a remote machine, yet still have the output graphically represented locally.

I never understood why outputting unescaped data is viewed differently from generating unenclosed html.

Like why doesn't `println` in a modern language like rust auto-escape output to a terminal, and require a special `TerminalStr` to output a raw string.


Because a terminal is not a browser but a screen. Outputting text isn't supposed to trigger anything aside from changing what's on the screen.

I think the problem is that 1) you want to be able to write arbitrary bytes, including terminal escape sequences, into files, 2) you don't want to accidentally write terminal escape sequences to stdout, and 3) stdout is modeled as a file.

Consider cat. It's short for concatenate. It concatenates the files passed to it as arguments and writes them to stdout, which may or may not be redirected to a file. If it didn't pass along terminal escapes, it would fail at its job of accurate concatenation.

Now I don't mean to dismiss your idea, I do think you are on the right track. The question is just how to do this cleanly given the very entrenched assumptions that lead us where we are.
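One way to square those three constraints: sanitize only when the output is actually a terminal, and pass bytes through untouched otherwise. A rough Python sketch (the regex covers only CSI/OSC sequences and C0 controls; real terminals accept more, so this is illustrative, not exhaustive):

```python
import io, re, sys

# Strip CSI sequences, OSC sequences, and C0 control bytes
# (except newline and tab) before they reach a terminal.
CONTROLS = re.compile(
    r"\x1b\[[0-?]*[ -/]*[@-~]"                 # CSI ... final byte
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"      # OSC ... BEL or ST
    r"|[\x00-\x08\x0b-\x1f\x7f]"               # other C0 controls, DEL
)

def safe_write(text: str, out=sys.stdout):
    # Only sanitize when talking to a terminal; redirection to a file
    # or pipe keeps the bytes intact, preserving cat's concatenation job.
    if out.isatty():
        text = CONTROLS.sub("", text)
    out.write(text)

print(CONTROLS.sub("", "plain \x1b[31mred\x1b[0m text"))  # plain red text
```

The entrenched assumption this fights is exactly point 3: a program can't always tell "file" from "terminal", and isatty() is only a heuristic (it says nothing about the far end of an ssh session).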


> that may or may not be redirected to a file

This is usually knowable.

It's a different question whether cat should be doing that, though – it's an extremely low level tool. What's wrong with `less`? (Other than the fact that some Docker images seem to not include it, which is pretty annoying and raises the question as to whether `docker exec` should be filtering escape sequences...)


Besides less having a lot of code (features, bloat) and therefore attack surface (some versions of less honor LESSSECURE=1, which on some OSes these days involves some pretty tight pledge(2) restrictions), or that some vendors have configured less by default to automatically run random code so you can automatically page a gzipped file, and an attacker can maybe run arbitrary code (whoops!). Besides those issues, and any others I do not know about? Nothing.

Sometimes you don't want to open stuff in a pager.

Is it a problem with "cat" or a terminal problem?

If I wrote my own version of cat in C, simply reading and displaying the text file one character at a time, wouldn't I see the same behavior?


As the article shows, it is a bug in iTerm2. cat is just one program that could trigger it, the key thing is outputting attacker controlled text to the terminal when the attacker can control what files are present (ie unzipping a folder that includes a specific executable file at a well chosen location that gets triggered to run when the readme is output to the terminal)

Given this, an equivalent MS-DOS shell headline would be "why I am never using Microsoft again" or something dramatic like that.

It is a problem in iTerm, Apple's overlay, not in the cat program. At least from reading the article, that's what I got


It's actually a third party terminal emulator: https://iterm2.com/

Yes. It’s a Mac problem. That’s why Macs do the worst at pwn2own. It’s compounded by the fact that Mac users deny that there are problems in their beloved OS.

cat is a file concatenation utility. UNIX people know to view text files with more.


More like iTerm2 is not safe

A long, long time ago, it was literally possible to stuff the command buffer of a “dumb terminal” using ESC sequences and spoof keyboard input. So yeah, don’t count on ’cat’ being safe if your terminal isn’t!

I did this in 1985 on SOROC terminals we had in my first job out of college. However, it depended on the dip switch settings that were under a little door on top of the keyboard.

> and spoof keyboard input

That's because we had terminal side macros. They were awesome in the 1980s.


> We'd like to acknowledge OpenAI for partnering with us on this project.

OpenAI: sponsor of today's 0-day.


More like the model knew of the previous, almost identical bug from 6 years ago. Whoever discovered that should be credited.

> We'd like to acknowledge OpenAI for partnering with us on this project

Thanks, saved me some reading time.


> We'd like to acknowledge OpenAI for partnering with us on this project.

AD in disguise


Is ghostty vulnerable?

likely not but what about other vulns?

what about spaceship? zsh or ohmyzsh?

time to reduce exposed surface


Would be nice if someone ran the steps to reproduce on ghostty

No, this bug is specific to iTerm2. As for whether there is something as bad for ghostty floating out there, I would hope not. It's a strong goal for it not to be. In Ghostty (and also the terminal I currently use, WezTerm) modularity is prized. What belongs as a clear add-on feature such as this doesn't get to run without being configured first.

OTOH, in iTerm2, surprising new features seem to be welcome, if not now, in recent memory. https://news.ycombinator.com/item?id=40458135


If I were a GNU core utils maintainer, I would not be too happy with this post title

I’ve said this for as long as I’ve been here on hacker news…

I want the terminal to be as dumb as possible.

I don’t want it to have any understanding of what it is displaying, or to ascribe any meaning or significance to the characters it is outputting.

The first time apples terminal.app displayed that little lock icon at the ssh password prompt?

The hairs on the back of your neck should have stood up.


What you’re describing would be a completely unusable terminal. You’d lose things as basic as the backspace key. And what’s wrong with Terminal.app indicating when it’s suppressing output?

Terminal.app does not suppress output in my example.

The ssh command switches the terminal into no-echo mode with termios flags.

Terminal.app, being clever, watches for disabled echo (among other things) and assumes a password is being entered and displays the key icon and enables Secure Event Input.

I don't want Terminal.app to be clever.


You're talking nonsense. Backspace worked entirely fine on dumb terminals

It is under 9front. There are no terminals; you get windows with shells in them.

Programs trying to "execute" every piece of data thrown at them are a pest.

I'm tired of iTerm2

- ssh conductor

- AI features almost forced on us until the community complained

- clickable links

I just want a dumb, reliable terminal. Is that too much to ask?


Clickable paths is the unique feature of iTerm2 I use the most. It's called semantic history, for some reason, and converts a UNIX environment into something like an IDE. I let it trigger a bash script that opens my editor when I click a path in, e.g., a stack trace or in the output of a sequence of piped commands.

The developers of Kitty, Ghostty etc. are such mouse haters that they won't even acknowledge the possibility of this feature, so I'm stuck with iTerm2.


Use Terminal.app. Since Tahoe it supports 24-bit colour and has key combos for the most common features.

then don't use it ? There are dozens of alternatives

> AI features almost forced on us until the community complained

This was a wrong take back when it happened and it’s even more silly to bring it up now. No AI features were forced on anyone, it was opt-in and HN lost its mind over a nothing burger.

“Oh no! This software has a feature I don’t like which isn’t even enabled by default, whatever will I do?”


> The final chunk (ace/c+aliFIo) works if that path exists locally and is executable.

Ah yes, the well known c+aliFIo shell script that every developer has. Inside the commonly used "ace" directory.

This article is sensationalist. And constructed by an LLM. It's well known that cat'ing binary files can introduce weird terminal escape codes into the session. Not surprised that iTerm's SSH integration is not security perfect.


Why does iTerm2 need to know the shell and/or Python versions on the other side? What happens if the other side is a system that doesn't "understand" its bootstrap script (like a network switch, or just some weird shell)?

What does iterm2 do with all that information, why does it need it? I don't get it


Wait, so... cat -v not considered harmful, then?

I used to leave a file called README in my public ftp directory that just said:

README: no such file or directory

One glorious day somebody finally sent me email complaining that they could not read the README file. I advised them to use "emacs README" instead of using cat. I was sorely disappointed they never sent me back a thank you note for correctly suggesting that emacs was the solution to their problem. It was my finest moment in passive aggressive emacs evangelism.


Even click-baity titles are not safe.

With LLM tool use, potentially every cat action could be a prompt injection