Asserting that something didn't happen is a test smell. There are an infinite number of things the test subject should not do. This code should not throw an exception. It should also not create a file. Nor should it make a sandwich, raise the Bat-Signal, or blow up Earth. These are all equally useless things to test for. Knowing that one of them didn't happen gives us almost no information.

Tests are a form of empiricism. They're science in action. In this case, the exception is a form of evidence: evidence that our module is behaving incorrectly. So what does it mean that we have no exception? Taking our instruction from science, the absence of evidence does not tell us that the module behaves correctly. The lack of an exception tells us only that the module doesn't appear to behave incorrectly in this specific way.
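To make this concrete, here is a minimal sketch of the kind of test this smell produces. The `Cache` class and its methods are hypothetical, invented purely for illustration:

```python
class Cache:
    """Hypothetical module under test: stores values by key."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        if key is None:
            raise ValueError("key must not be None")
        self._data[key] = value

def test_put_does_not_raise():
    # The entire "assertion" here is that no exception escapes the call.
    # A put() that silently discarded the value would pass just as well.
    Cache().put("answer", 42)
```

The test passes whether or not the value was actually stored; it rules out exactly one of the infinitely many ways the code could misbehave.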

That's not necessarily useless information. But we don't write code to not do things. We write code to do things. If we explicitly need to not do a thing, that is typically because we need to do something else a little later. That something else is the thing we should test for. Not always, but often. And I would argue that in clean code, it is almost always the case.

Doing things creates output. So tests should assert on output. They should assert that the specific nature of the output correlates correctly to the specific nature of the input. In this case, exceptions are output. The lack of exception is not output. It's the lack of output. (By contrast, delegating to some dependency, writing to disk, etc. are a form of output... just indirect output from the "other side" of the module.)
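As a sketch of what asserting on output looks like, using Python's `unittest` and the same hypothetical `Cache` from before: one test pins the direct output for good input, and the other pins the exception as the output for bad input.

```python
import unittest

class Cache:
    """Hypothetical module: stores values by key, rejects None keys."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        if key is None:
            raise ValueError("key must not be None")
        self._data[key] = value

    def get(self, key):
        return self._data[key]

class OutputTests(unittest.TestCase):
    def test_get_returns_what_was_put(self):
        cache = Cache()
        cache.put("answer", 42)
        # Direct output: the returned value correlates to the input.
        self.assertEqual(cache.get("answer"), 42)

    def test_none_key_raises(self):
        # The exception itself is output: assert that it happens,
        # rather than asserting somewhere else that it doesn't.
        with self.assertRaises(ValueError):
            Cache().put(None, 42)
```

Both tests tie a specific output to a specific input, which is exactly the correlation a "didn't throw" test fails to establish.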

Usually an output-less operation like this means our module is responsible for holding some hidden internal state, and usually that state will affect successive operations on the module. This is the output. Instead of testing that we didn't fail extravagantly at the midpoint, we should focus on testing the output of the successive operations, and ensuring they are correctly impacted by the earlier operation.
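A sketch of that approach, with a hypothetical `Uploader` whose `begin()` is exactly such an output-less operation: its only effect is hidden internal state, so the tests exercise it through the later operations whose output it enables.

```python
class Uploader:
    """Hypothetical module whose begin() is output-less:
    its only effect is hidden internal state."""
    def __init__(self):
        self._parts = None

    def begin(self):
        self._parts = []          # hidden state; no return value

    def add(self, part):
        if self._parts is None:
            raise RuntimeError("call begin() first")
        self._parts.append(part)

    def finish(self):
        joined = ",".join(self._parts)
        self._parts = None
        return joined

# Rather than asserting that begin() didn't raise, drive the successive
# operations and assert on the output that begin() made possible.
uploader = Uploader()
uploader.begin()
uploader.add("a")
uploader.add("b")
assert uploader.finish() == "a,b"
```

If `begin()` were broken, the final assertion (or the `RuntimeError` from `add()`) would fail loudly, so the midpoint operation is still covered, just indirectly.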

An astute reader will note that all of this is an indication of temporal coupling between the methods on the module, and that we should avoid the situation altogether. This is almost certainly true! But in my experience that juice is not always worth the squeeze. Sometimes that's because of the legacy of the application, and sometimes it's inherent in the problem being solved. Maybe you'll get to deal with that problem at the root someday, but let's assume it's not today. We can still walk away with a win if we write some good tests for the code in our hands today.