Perhaps you could start an RFC and we could all iterate on it to create a “proper” Stream API for Rust? I think you’ve pointed out a lot of really good ideas and points that should be addressed. I pretty much agree with your analysis, though I would not have had the foresight to categorize the issues so succinctly.

std::io::Read doesn’t force UTF-8. In fact, it does not imply any encoding - it’s just a stream of bytes. It could be a UTF-8 encoded text file from the local disk, EUC-KR encoded HTML from a gunzip stream, or even a JPEG-encoded picture of a kitten from the internet.
Read is used as a low-level abstraction in an I/O context. It only cares about bytes, because everything in memory is bytes! An arbitrarily typed generic iterator, which would be std::iter::Iterator, can be constructed on top of it.
I think what makes yo…

Regarding Read and UTF-16: you can always write an extension trait which implements convenience UTF-16 methods while using raw byte I/O under the hood. Should UTF-16 methods, or methods which accept different encodings, be in std? Personally I don’t think so, but it’s a good idea for a crate. (Maybe it already exists?)

In tokio land, you’d implement this with a Decoder layered on top of a raw byte stream (at the lowest level, this is always the type of the stream). The decoder would turn the bytes into whatever higher-level type you want, and consumers would work off streams that are decoded underneath. This is all type-safe and uses generics extensively, so it gets the optimization/codegen benefits of that. You can then also take a decoded stream/sink and split it into read and write halves, if you want to …

The problem with an Iterator-only approach is that it doesn’t scale down to the low level. Rust is a systems programming language. A common scenario in such I/O is to memcpy incoming bytes from an OS-managed buffer into my own, and then parse that byte array to produce meaningful types. How can we model this operation with Iterator? Copying memory byte-by-byte is over 10 times slower than memcpy. Exposing a slice of the internal buffer has lifetime issues, as this buffer should be reused. Vec implies a heap allocation for every read(), wh…

ripgrep supports searching either UTF-8 or UTF-16 seamlessly, via BOM sniffing. My Windows users appreciate this. The search implementation itself only cares about getting something that implements io::Read. UTF-16 handling works by implementing a shim for io::Read that transcodes UTF-16 to UTF-8. I did this in less than a day’s worth of work and it was well worth it.

First of all, I have to say that ripgrep is impressive work!! I’ve used it just recently because it smokes everything else if you need to trawl through gigabytes of data for a keyword.
The whole argument I’ve been trying to clumsily make above is that your hard work for things like BOM detection and encoding switching should have been built into Rust and not be a part of the ripgrep codebase. At the end of the day, what you’ve written is a single-purpose tool, but large chunks of its codebase loo…

It really doesn’t. The transcoding is itself handled by a separate crate, and the shim itself isn’t specific to ripgrep and could be lifted into a separate crate. Any enterprising individual could accomplish that. ripgrep used to be much more monolithic, and I’ve been steadily moving pieces out into separate crates. The UTF-16 shim is one such candidate for moving into a separate crate, but nobody has put in the work to do it.
That’s false. UTF-16 is a variable-width encoding (not all Unicode…

How would you implement this for a Read over f32? What if you’re trying to treat incoming measurement data as a stream of numbers, e.g.: for DSP-style programming? You can always use unimplemented!() or panic!(), but that’s really icky because then libraries all over the place will have to include code that can crash the process.
Or, you could always reinvent the general concept of streams for your special case.
Either way, eww…
There are certainly a lot of moving parts here…
There doesn’t…

None of my examples with ripgrep used memory maps, so I don’t know why you’re bringing that up. My shim for transcoding doesn’t assume the presence of a caller-provided buffer, but it could, and the code would be simpler but make more assumptions.
This conversation is going in circles and there is too much certainty in your comments for my taste. Our lack of shared experience is preventing us from communicating productively, and in particular, it’s pretty hard for me to grok everything you’re sa…

My proposal is a lower “denominator” than the current Read trait. In fact, now that I think about it, I was wrong in my earlier statement that it can’t be retrofitted into Rust because it’s inherently incompatible with what’s already there.
The exact opposite is true: It is a strict superset of std::io::Read, allowing it to implement the Read trait for the special case of u8. Meanwhile, the Read trait cannot implement the more elegant zero-copy trait, because:
It cannot read without consumin…

Please convince yourself that an implementation of Read2 for std::fs::File cannot have fewer copies than what the Read trait does. The type signatures alone already tell me that, and indeed, they directly imply that any implementation of Read2 for std::fs::File that uses standard read calls must necessarily maintain an internal buffer. This is what std::io::BufReader does for any implementation of Read, but Read does not require the use of an internal buffer and is thus more flexible.
You fund…

Note that you borrow self mutably and then (presumably) return a slice to a buffer inside self. This will not work nicely, as you’ll have to drop the &[Self::Data] before calling peek again, otherwise the borrow checker will rightfully yell at you. Borrow regions could probably help here, but your proposal has another problem: how do you think it will work with, e.g., buffered file I/O? Also, without GATs it will not work if the underlying buffer is not owned.
No one forbids you from prototyping such zero-copy R…

This seems like a higher-level API than Read. Non-destructive peeking means the source has a buffer (either naturally or manufactured), whereas Read is just a stream. If you want to add a buffer on top and allow peeking, you can do that yourself (or use BufReader, or the BufRead trait in the API requirements). BufRead has a fill_buf/consume duo that can be used to do buffering, peeking, and then advancement.
I’m not really seeing an issue with Read being the lowest level API. It allows other…

IIUC the main issue with the Read trait highlighted by @peter_bertok is that it’s not suited for zero-copy processing, e.g. when you use memory-mapped files or have all data in memory already. It’s indeed an issue; e.g. in the rosbag crate I had to write an (almost) zero-copy parser myself (though something like nom could probably have been useful), but I think that such a solution should be prototyped outside of std (and maybe even stay outside of it) and that it’s currently blocked on implementation of the…

I’ve read through this thread [edit: most of it; see below] and the blogpost about the Pipelines interface, and I’m not quite sure I understand your current position on the stream-vs-pipes issue, @peter_bertok.
In light of this addendum:
Do you still think the Pipeline implementation in C# is an example of what you’d prefer to see in Rust?
Since Pipelines are implemented in C# with byte-streams, do you still think the Pipe concept is incompatible with a byte-stream API in principle?
You men…

If you have all data in memory and want zero copy, isn’t that what a &[u8] is for? It sounds to me like Pipelines (I didn’t read that much about it, so I might be wrong) manages internal buffers for you, does whatever I/O is needed to fill them, and then exposes slices into them while allowing you to indicate that you’re done with a slice. It sounds very much like the ring buffers used in networking. I agree it’s useful, but it’s fundamentally a different API and use case from Read, or so it seems to me.

I can’t. That’s not the point.
Obviously, both Read and Read2 would use exactly 1 copy in this scenario, because this scenario is a user-to-kernel call. The difference is that the Read2 API is free not to make the copy in other scenarios, such as memory-mapped files or user-mode networking. This allows the same abstraction – the exact same trait – to be the core of a much richer set of “streaming” code, not just traditional POSIX/Win32 file I/O.
Believe it or not, traditional file I/O is not …

I don’t think this blindly assumes a file can’t be greater than u32 (equiv to usize on 32-bit); rather, I think it says, “If I’m on a platform with a maximum (directly) addressable range of 32-bits, I can’t mmap a file bigger than that.” Granted, there are ways to do paging/windowing and such (as you’ve described), but, this is simply a wrapper around the platform’s mmap implementation, and that would not support anything to be mmappable greater than the usize for that platform.

Flipping through the memmap-rs crate shows that in this case it’s technically correct to return Result<usize>, based on the description of get_len(), but this is confusing for the user.
Files can be bigger than u32::MAX, and it’s always been possible to memory-map files bigger than 4 GB on 32-bit platforms, exactly the same way it’s been possible to read files of arbitrary size using streaming APIs since forever.
The “mapped window size”, and the “file size” are distinct concepts, only the…

That post seems to reaffirm that the mmap approach wouldn’t really be preferable to the stream approach unless the data in question is already in-memory on a fast hardware device. Since that’s not something that can generally be assumed, I still don’t see the existence of memory mapping as a good reason to avoid stream-based IO by default, especially in the case of networked or distributed systems (since data over IP by definition isn’t already in memory). Am I missing something, or is that not…

I would argue that the BufRead/Read2 approach is always better, but this takes some insight into API design. An incredibly common “learning curve” I see goes like this:
Q: I did this I/O code! It’s slow! Can someone help please?
A: You’re using Read, but you’re doing too many kernel calls because you’re consuming a few bytes at a time. You should use BufReader. This “lesson” is right there in the doco in the first example.
This was the underlying root cause of poor performance of the pre-Firefox …

I’ve made a career fixing issues exactly like this, so I dunno… it may be hyperbole, but it’s effective. 8)
I’ve seen - repeatedly - clusters of servers worth millions running like slow molasses because someone used a too-small network buffer throughout the codebase. It was the default in C++. It’s going to be the default in Rust. It’s going to be slow too. It’s not complicated.

Firstly, the BufRead or Read2 traits are 100% stream-based I/O. The difference between those and Read is only that the buffer is “handed to the user” instead of being “passed in” AND that the source position is not forcibly advanced after the buffer is available to consume. This gives the API designers more flexibility if this is the default, and the user code is more elegant and more efficient by default.
There are a couple of interacting / composable use-cases here, with mmap just being one.…

I’m intrigued by what you have to say regarding this topic. I’ve been looking for something to dig my teeth into with respect to Rust that I felt was interesting. I’d like to spin-up a Git repo to begin working on this idea. I’d like your participation, if nothing else at least advisory, but, any collaboration would be appreciated.
Honestly, I see that as the only useful way to move forward on something like this. Continued discussion about it is probably not useful without starting to actually…

To be honest, I feel like I’m learning alongside with everyone else. 8)
I didn’t start out with the zero-copy thing as a goal, I only noticed that as a possibility after reading about the System.IO.Pipeline design.
I think a lot more research coupled with some experimental API tire-kicking is the best bet, and I would definitely seek the involvement of the tokio guys. Asynchronous I/O is usually used when performance matters the most, and zero copy = more performance!
I haven’t personally rea…

Have you looked at mio? Also, see the book: https://legacy.gitbook.com/book/wycats/mio-book/details
EDIT: I knew about mio (in a trivial sense) before, but, I hadn’t yet spent much time digging into it. As I begin to, I think that starting any sort of project (as opposed to mio) is probably not useful (at least until I understand mio fully and am sure that it isn’t meeting, or at least intending/endeavoring to meet, the needs you have described).
EDIT: Does anyone else know of other Crates…

I did look at that, it’s “just” doing things like wrapping the system-provided APIs for notification. It’s useful in scenarios where you might be reading from 1000 sockets at once, all of which are trickling data into a dozen server CPU cores doing the processing.
The Read2 API is much simpler than this conceptually, and is only tangentially related. E.g.: it might be feasible to extend it with a fn peek_any(...) API somehow via a related trait or something.
PS: http://www.aosabook.org/en/posa…

Yes, I agree, as I look more into mio.
Yes, it is.
I think it might be advantageous to end/close this thread and spawn off a new thread to do the following:
solidify the requirements
survey the existing crates.io efforts and how they do/do not meet the requirements
analyze the C# Pipelines API and Java NIO and perhaps others for ideas/inspiration and/or problems to avoid
spin up a repo to begin the Trait/Interface designs and make decisions about how this should all interoperate etc. (as I mention…

Idea: a Read-like trait that returns an owned buffer (like a buffer from the bytes crate). This sounds even more generic than BufRead/Read2 to me, because it handles use cases where the buffer must be owned by the caller for later use. In exchange, it feels less zero-cost to the user, because there’ll be an additional layer of refcounting (though true zero-cost remains available for the additional use cases like the kernel ring buffer).
FWIW, this is the approach tak…

As promised, I’ve spun up a new thread for further discussion of the I/O issues raised here (Towards a more perfect RustIO) to provide a place to discuss the “Rust IO” issue. Further discussion on this thread should be limited to other concerns (if any).

User @peter_bertok appears to be good at this kind of applied problem. The knowledge in this collection of links is slightly esoteric, so tread with due caution. Create and link new topics as required.