Current version of the proposal: https://github.com/golang/go/issues/67074#issuecomment-3201513059
Proposal Details
The io.Copy and io.CopyBuffer functions will use WriteTo when the source supports it and ReadFrom when the destination supports it.
There are cases when a type can efficiently support WriteTo or ReadFrom, but only some of the time. For example, CL 472475 added a WriteTo method to *os.File which performs a more efficient copy on Linux systems when the destination is a Unix or TCP socket. In all other cases, it falls back to io.Copy.
The io.Copy fallback means that io.CopyBuffer with an *os.File source will no longer use the provided buffer, because the buffer is not plumbed through to WriteTo. (It also caused https://go.dev/issue/66988, in which the fallback prevented the fast path in *net.TCPConn.ReadFrom from being taken.)
There have been previous proposals to fix this problem:
* #21904: Rejected.
* #16474: Discussion ongoing, but this wouldn't address the io.Copy case.
Those issues also contain some links to various other instances of this problem turning up, but I think the *os.File.WriteTo case is a sufficient example of a problem that needs solving, since io.CopyBuffer no longer works as expected on files.
I propose that we add two new interfaces to the io package:
package io
// CopierTo is the interface that wraps the CopyTo method.
//
// CopyTo writes data to w until there is no more data to write or when an error occurs.
// The return value n is the number of bytes written.
// Any error encountered during the write is also returned.
//
// When len(buf) > 0, CopyTo may use the provided buffer as temporary space.
//
// CopyTo may return an error wrapping errors.ErrUnsupported to indicate that
// it is unable to efficiently copy to w.
//
// The Copy function uses CopierTo if available.
type CopierTo interface {
    CopyTo(w Writer, buf []byte) (n int64, err error)
}
// CopierFrom is the interface that wraps the CopyFrom method.
//
// CopyFrom reads data from r until EOF or error.
// The return value n is the number of bytes read.
// Any error except EOF encountered during the read is also returned.
//
// When len(buf) > 0, CopyFrom may use the provided buffer as temporary space.
//
// CopyFrom may return an error wrapping errors.ErrUnsupported to indicate that
// it is unable to efficiently copy from r.
//
// The Copy function uses CopierFrom if available.
type CopierFrom interface {
    CopyFrom(r Reader, buf []byte) (n int64, err error)
}
The CopierTo and CopierFrom interfaces supersede WriterTo and ReaderFrom when available. They provide a way to plumb the copy buffer down through the copy operation, and permit a type to implement a fast path for some subset of sources or destinations while deferring to io.Copy for other cases.
We will update io.Copy and io.CopyBuffer to prefer CopierTo and CopierFrom when available.
We will update *os.File and *net.TCPConn (and possibly other types) to add CopyTo and CopyFrom methods.
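For illustration, a conditional implementation under this first-draft signature might look roughly like the following; the file type and its zeroCopyTo helper are hypothetical sketches, not the actual os implementation:

package fileexample

import (
    "errors"
    "io"
)

// file is a hypothetical type standing in for something like *os.File.
type file struct{}

// CopyTo follows the proposed first-draft signature. It tries a platform
// fast path and otherwise tells io.Copy to fall back to its generic loop,
// which will use buf if the caller supplied one via io.CopyBuffer.
func (f *file) CopyTo(w io.Writer, buf []byte) (int64, error) {
    if n, handled, err := f.zeroCopyTo(w); handled {
        return n, err // a sendfile-style fast path performed the copy
    }
    return 0, errors.ErrUnsupported // no fast path for this destination
}

// zeroCopyTo is a hypothetical helper wrapping a platform-specific system call.
func (f *file) zeroCopyTo(w io.Writer) (int64, bool, error) {
    return 0, false, nil
}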
Comment From: jfrech
Is CopyTo guaranteed to have had no observable effect when it returns an errors.Is(,errors.ErrUnsupported) error?
Comment From: neild
Is CopyTo guaranteed to have had no observable effect when it returns an errors.Is(,errors.ErrUnsupported) error?
Yes.
Comment From: jfrech
Maybe I'm missing the point, but what precisely is the issue with solving this the way you opt out of any unwanted type unerasure: by forcing a barrier struct (as already mentioned in #16474)? If you are using io.CopyBuffer, chances are you specially prepared the buffer, so we are talking about a deliberate, pin-pointed optimization, not the implicit system-wide optimizations which io.WriterTo does help with.
io.CopyBuffer is not alone in exhibiting behaviour not expressible in the type system which one needs to be wary about: compress/zlib.NewReader may discard data if the reader cannot unerase to an exact one, and bufio.(*Reader).WriteTo can even call ReadFrom (although undocumented). If you want to prohibit a function from utilizing its fast paths, hide your own capabilities.
Comment From: neild
The motivation for this proposal is to fix io.Copy when used with common standard library types such as *os.File and *net.TCPConn.
It's perhaps not immediately obvious that io.Copy is broken. Let's look at a concrete example from the net/http package (https://go.googlesource.com/go/+/refs/tags/go1.22.2/src/net/http/transfer.go#412):
func (t *transferWriter) doBodyCopy(dst io.Writer, src io.Reader) (n int64, err error) {
    buf := getCopyBuf()
    defer putCopyBuf(buf)
    n, err = io.CopyBuffer(dst, src, buf)
    if err != nil && err != io.EOF {
        t.bodyReadError = err
    }
    return
}
This function copies from a user-provided io.Reader to an io.Writer. It doesn't know anything about the source and destination, although the Writer has usually been created by the net/http package. It uses io.CopyBuffer and does buffer pooling to minimize allocations. The copy operation takes advantage of sendfile when available.
Let's consider the case where we're on macOS, copying from an *os.File created by os.Pipe, and to a *net.TCPConn.
In this case:
- doBodyCopy allocates a copy buffer from its pool.
- io.CopyBuffer notices that the source supports WriteTo and calls it.
- *os.File.WriteTo doesn't have any special handling on macOS, so it wraps the source in a type that removes the WriteTo method and calls io.Copy.
- io.Copy notices that the destination supports ReadFrom and calls it.
- *net.TCPConn.ReadFrom calls sendfile, which fails, because macOS doesn't support sendfile from a pipe.
- *net.TCPConn.ReadFrom wraps the destination in a type that removes the ReadFrom method and calls io.Copy.
- io.Copy allocates a buffer and does the copy.
Note that there are three io.Copy operations in the call stack. We lost the user-allocated buffer provided to CopyBuffer after the first one. It is not at all obvious to the original caller (doBodyCopy) that the buffer provided to CopyBuffer won't be used; doBodyCopy was, after all, operating on a plain io.Reader and io.Writer.
We could change doBodyCopy to mask any ReadFrom or WriteTo methods on the reader and writer, at the cost of disabling the sendfile optimization when we do want it. Requiring every caller of io.CopyBuffer everywhere to do this would be unfortunate.
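For concreteness, the masking mentioned above could be done with a barrier struct along these lines (a hypothetical sketch, not an actual net/http change):

import "io"

// onlyReader and onlyWriter expose only Read and Write, hiding any
// WriteTo/ReadFrom methods on the wrapped values so that io.CopyBuffer
// must use its generic loop and therefore the provided buffer.
type onlyReader struct{ io.Reader }
type onlyWriter struct{ io.Writer }

// A hypothetical masked variant of doBodyCopy's copy call:
func doBodyCopyMasked(dst io.Writer, src io.Reader, buf []byte) (int64, error) {
    return io.CopyBuffer(onlyWriter{dst}, onlyReader{src}, buf)
}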
This is a mess.
At a high level, I have two goals. When using common types from the standard library (files, network sockets):
- io.Copy (and related functions) should not recursively call itself; and
- io.CopyBuffer should use the buffer provided to it.
Arguably, the recursive calls to Copy aren't necessarily a problem, but they add so much confusion to the call stack that it has become very difficult to understand what's actually going on in a copy operation. The CopyBuffer buffer getting lost is, I think, at this point a straight-up bug: as of Go 1.22, CopyBuffer with an *os.File as the source will never use the provided buffer. The fact that this changed between Go 1.21 and 1.22 seems to me to be a plain regression.
The Go compatibility promise closes off most avenues for fixing this:
- We document io.Copy as calling ReadFrom/WriteTo when available.
- Changing io.CopyBuffer to not call ReadFrom/WriteTo seems likely to cause regressions in code that expects the current behavior.
- We can't remove the WriteTo method from *os.File.
- We can't remove the ReadFrom method from *net.TCPConn.
- We can't start returning errors.ErrUnsupported from existing ReadFrom or WriteTo methods.
(Of these options, the last one seems like the most feasible: if we could return ErrUnsupported from *os.File.WriteTo and *net.TCPConn.ReadFrom when the fast sendfile path is not available, then we could get by without any new APIs. That was #21904, which was rejected. Perhaps it should be reopened?)
The root of the problem is that WriteTo and ReadFrom aren't the right interface between io.Copy and an io.Reader/io.Writer:
- Support for fast-path copies may be conditional on the pairing of (source, destination), but the WriteTo/ReadFrom methods are necessarily associated with only one or the other. There needs to be a way for a type to support fast copies, with a fallback to the usual copy path.
- If a type needs a buffer to perform its copy, WriteTo/ReadFrom have no access to the buffer provided to io.CopyBuffer.
The proposed CopierTo/CopierFrom aim to fix io.Copy on os and net package types by providing an interface that does include the necessary features.
Adding yet more magic methods to dig us out of the hole caused by the existing magic methods does seem somewhat odd. It's quite possible that if we were designing io.Copy from the start, there would be a better way to do this. But I don't see another way out given the current constraints; I believe this proposal does address the problem, and I think that the addition of *os.File.WriteTo in Go 1.22 has put us at a point where surprising io.Copy behavior is common enough that we need to do something to address the situation.
Comment From: ianlancetaylor
We could constrain this by using an internal package, and by looking for methods that return types defined in that internal package. That would mean that only types in the standard library would implement the new interfaces. On the one hand that would prevent other packages from taking advantage of these new methods. On the other hand it would constrain the additional complexity.
For example
package iofd

type FD uintptr

type SysFDer interface {
    SysFD() FD
    CopyBetween(dst, src FD, buf []byte) (int64, bool, error)
}
Then in package io
srcfd, srcOK := src.(iofd.SysFDer)
dstfd, dstOK := dst.(iofd.SysFDer)
if srcOK && dstOK {
    len, handled, err := srcfd.CopyBetween(dstfd, srcfd, buf)
    if handled {
        return len, err
    }
}
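For illustration only, the other half of the contract on a standard-library type might look roughly like this (hypothetical methods on os.File; neither exists today, and the import path of the internal package is assumed):

import "iofd" // the hypothetical internal package above

// SysFD exposes the file's descriptor to package io (hypothetical method).
func (f *File) SysFD() iofd.FD { return iofd.FD(f.Fd()) }

// CopyBetween would try copy_file_range/sendfile between the two descriptors,
// reporting handled=false so io.Copy falls back to its generic loop when no
// fast path applies (hypothetical stub).
func (f *File) CopyBetween(dst, src iofd.FD, buf []byte) (int64, bool, error) {
    return 0, false, nil
}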
Comment From: neild
We could constrain this to be internal-only, but I don't think it's worth it. We'd still have the complexity of the new methods; we'd just not be allowing anyone else to take advantage of the benefits.
Comment From: ianlancetaylor
I think the complexity of the new methods is significantly less if nobody else can implement them. In particular we can design the new methods so that they work on the pair of source and destination, which is what we need. One of the problems with the current methods is that they only work on the source or on the destination, which is how we've gotten into the position of needing to use both, and therefore needing to turn off both, and therefore increasing complexity.
Comment From: neild
#69097 is another example of things going wrong in the current tangle of ReaderFroms/WriterTos. I'm pretty sure this also broke with the addition of *os.File.WriteTo in 1.22.
Comment From: neild
In particular we can design the new methods so that they work on the pair of source and destination, which is what we need.
I don't follow this. Any method we add will need to be called on one of the source or destination, with the other side of the copy as a parameter. I don't see the reason to pass srcfd as a parameter in the example; srcfd is already the receiver, so we're just passing it to the method twice:
len, handled, err := srcfd.CopyBetween(dstfd, srcfd, buf)
Also, requiring that both source and destination implement SysFDer is limiting. Currently, the Linux File.ReadFrom checks to see if the source is an io.LimitedReader and extracts the inner reader when available ( https://go.googlesource.com/go/+/refs/tags/go1.23.0/src/os/zero_copy_linux.go#73). Either we'd have to make LimitedReader implement SysFDer or drop the ability to splice from a size-limited source.
The problem with ReaderFrom/WriterTo isn't that they operate on only the source or destination, it's that they have no access to the copy buffer and no way to fall back if the fast path isn't available. I don't think fixing those problems adds complexity; instead it removes it.
Comment From: ianlancetaylor
You're right, there is no reason to make CopyBetween a method of SysFDer. It should just be a top-level internal/poll.CopyBetween function that copies between two file descriptors. We can handle LimitedReader (and CopyN) by also passing a maximum length.
I think there is a problem with ReaderFrom and WriterTo that you aren't mentioning: in order to use them to invoke sendfile or copy_file_range, whoever implements the methods needs to dig through the argument to find out what it can do. That's the step I'm trying to avoid with SysFDer. I agree that this does not help with the io.CopyBuffer problem.
Comment From: neild
It'd be nice to fix the fast paths that stopped working in 1.22. (net/http has a path which uses io.CopyBuffer with a pooled buffer where the buffer is ignored; #69097 is a sendfile path that no longer works.)
I think there are two orthogonal points in the discussion above:
- ReaderFrom and WriterTo aren't the right interface between io.Copy(Buffer) and types with a fast-path copy method. They don't allow a type to implement conditional fast copy (e.g., fast copy to/from only certain other types) and they don't plumb through the copy buffer.
- Our implementation of sendfile (or sendfile-like) fast paths is mostly a pile of special cases where each type (net.TCPConn, os.File) checks for some other list of types that it can copy to/from.
SysFDer seems like a possible solution to the second problem.
I don't think it's a solution to the first problem. I don't think the right fix here is to define a fast path that only works for types in the standard library. It should be possible for someone to write a type outside the standard library with a fast-path copy method that uses the buffer passed to io.CopyBuffer, or which implements a conditional fast path that falls back to the regular copy some of the time without recursing.
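As a concrete (hypothetical) illustration of that last point, using the first-draft CopierFrom signature:

import (
    "errors"
    "io"
    "os"
)

// fileSink is a hypothetical third-party writer wrapping a file.
type fileSink struct{ f *os.File }

func (s *fileSink) Write(p []byte) (int, error) { return s.f.Write(p) }

// CopyFrom implements a conditional fast path: it only applies when the
// source is also an *os.File. Otherwise it reports ErrUnsupported and
// io.CopyBuffer performs a regular copy, still using the caller's buffer.
func (s *fileSink) CopyFrom(r io.Reader, buf []byte) (int64, error) {
    if src, ok := r.(*os.File); ok {
        return fastFileCopy(s.f, src) // hypothetical copy_file_range-style helper
    }
    return 0, errors.ErrUnsupported
}

// fastFileCopy is a stand-in; a real implementation would use a zero-copy syscall.
func fastFileCopy(dst, src *os.File) (int64, error) {
    return io.Copy(dst, src)
}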
Comment From: ianlancetaylor
ReadFrom and WriteTo were added in https://go.dev/cl/166041, to permit io.Copy to do I/O to and from a bytes.Buffer without having to allocate a buffer to do the copy. In https://go.dev/cl/4543071 we extended this mechanism to use ReadFrom and WriteTo to wrap the sendfile system call. Then in https://go.dev/cl/8730 we added io.CopyBuffer, another way to copy data without allocating a buffer.
While the individual steps are reasonable, in retrospect this looks like a mistake.
The ReadFrom and WriteTo methods are exactly what we want for a particular case: a data structure that fully controls its own data and allocation, such as bytes.Buffer, bytes.Reader, or strings.Reader. Types like that don't need an extra buffer, so they correctly ignore the buffer passed to io.CopyBuffer.
However, these methods are not what we want for accessing special system calls like sendfile, copy_file_range, or splice. Those special system calls do not need a buffer when they are available. But they are not always available. What we need here is a mechanism that says "read up to N (or unlimited) bytes from this source, but only if you can do it through magic". Or we could instead create a mechanism that writes data, but we don't need both reading and writing. That's because the goal here is to use magic, and that means that the types must understand each other. That is basically what the unexported function net.sendFile is: a function that magically reads from certain source types.
But at least in the standard library I don't think there are any other cases of ReadFrom or WriteTo. Please do let me know if I'm mistaken. But if I'm not mistaken, that means that all real implementations of ReadFrom and WriteTo do not need a buffer passed in.
So why are we thinking about passing in a buffer? Because the magic system calls are not always available; they depend on the types being copied between. And in our current implementation, if the magic system calls are not available, they fall back on doing the copy the old-fashioned way. And when that happens, they should use the buffer passed to io.CopyBuffer.
So we don't need CopyTo or CopyFrom. What we need is a way to say "use a magic function if available, and just return if not." Something like:
// DataSender may be implemented by a data source.
// The SendData method sends data to w, sending up to n bytes.
// If n is negative, it sends all available bytes.
// It returns the number of bytes sent.
// This is expected to use an efficient mechanism that depends on both
// the data source and w, such as the sendfile system call.
// If no efficient mechanism is available for the pair of the data source and w,
// this will return 0, [errors.ErrUnsupported].
// For a partial send, this will return a positive value along with an error.
type DataSender interface {
    SendData(w Writer, n int64) (int64, error)
}
Note that we need not define a DataReceiver interface. DataSender is enough.
We could pass in an optional buffer if that seems useful. Today I think it is not useful, but perhaps someday it would be.
Then we change io.Copy:
- check whether the reader implements DataSender
  - if yes, call it; if it does not return errors.ErrUnsupported, we are done
  - if no, check whether the reader implements WriteTo
    - if yes, call it; we are done
- check whether the writer implements DataSender
  - if no, check whether the writer implements ReadFrom
    - if yes, call it; we are done
- if we get here, copy using the buffer as usual
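A minimal sketch of that dispatch inside package io's copy helper, assuming the DataSender interface above; genericCopy stands in for the existing buffer loop:

func copyBuffer(dst Writer, src Reader, buf []byte) (int64, error) {
    if ds, ok := src.(DataSender); ok {
        n, err := ds.SendData(dst, -1)
        if err != errors.ErrUnsupported {
            return n, err // the magic path ran (or failed for real); we are done
        }
        // Unsupported for this pair: deliberately do NOT try src.WriteTo,
        // which could recurse back into Copy.
    } else if wt, ok := src.(WriterTo); ok {
        return wt.WriteTo(dst)
    }
    if _, ok := dst.(DataSender); !ok {
        if rf, ok := dst.(ReaderFrom); ok {
            return rf.ReadFrom(src)
        }
    }
    return genericCopy(dst, src, buf) // the usual buffered loop
}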
The goal of these steps is to use the SendData method to block calling ReadFrom or WriteTo. That lets us implement magic system calls without having to fall back to an unbuffered io.Copy. And we can still use the unbuffered methods of types like bytes.Buffer when they are available.
Hopefully I haven't made a mistake somewhere.
Comment From: neild
Passing the number of bytes to copy is a good change. That should avoid an allocation in io.CopyN, as opposed to wrapping a source in an io.LimitedReader.
I think we should pass in the optional buffer. Even if we don't have any immediate need for it, passing both the number of bytes to copy and an optional buffer means the call includes all the parameters passed to io.Copy, io.CopyBuffer, and io.CopyN. So long as we don't add more io.Copy* functions, that should be reasonably future-proof.
Thinking through a few possible scenarios:
- copy from an *os.File to an *os.File: we call src.SendData(dst, -1)
  - if the fast path can't be used, this returns an error and we fall back to a regular copy, instead of recursing
- copy from a *bytes.Buffer to an *os.File: we call src.WriteTo(dst)
- copy from an *os.File to a *bytes.Buffer: we call dst.ReadFrom(src)
  - this is an improvement on today, since we skip a recursive io.Copy call
- copy from a *bytes.Buffer to a *bytes.Buffer: we call src.WriteTo(dst)
I think that all works.
The one place where it seems like a single SendData method (as opposed to paired SendData/ReceiveData methods) might be insufficient is if both directions of the copy can't be implemented without a dependency cycle. For example, *os.File and *net.TCPConn mutually support fast-copy, but net depends on os. I don't think that's a problem in the case of os/net, however, because we already break that cycle using the syscall.Conn interface.
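For reference, the fast path only needs the peer's raw descriptor, which it can obtain through syscall.Conn without a package dependency; a rough sketch under that assumption (sendViaFD and sendToFD are invented names):

import (
    "errors"
    "io"
    "syscall"
)

// sendViaFD is a hypothetical helper a SendData implementation could use:
// it extracts the destination's descriptor through syscall.Conn and hands
// both descriptors to a sendfile-style helper.
func sendViaFD(w io.Writer, srcFD uintptr, n int64) (int64, error) {
    sc, ok := w.(syscall.Conn)
    if !ok {
        return 0, errors.ErrUnsupported
    }
    rc, err := sc.SyscallConn()
    if err != nil {
        return 0, errors.ErrUnsupported
    }
    var written int64
    var copyErr error
    if err := rc.Control(func(fd uintptr) {
        written, copyErr = sendToFD(fd, srcFD, n) // hypothetical zero-copy helper
    }); err != nil {
        return 0, errors.ErrUnsupported
    }
    return written, copyErr
}

// sendToFD is a stub standing in for sendfile/splice between two descriptors.
func sendToFD(dst, src uintptr, n int64) (int64, error) { return 0, errors.ErrUnsupported }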
I like the name CopyTo as clearly indicating that the method is invoked by io.Copy functions. But on the other hand, CopyTo reads more like a method that a user might want to call directly (which they should not do). SendData may be a better name for avoiding misuse. Either way, we should clearly document that users should use io.Copy, not SendData directly.
With the optional buffer:
// DataSender may be implemented by a data source.
// The SendData method sends up to n bytes of data to w.
// If n is negative, it sends all available bytes.
// It returns the number of bytes sent.
//
// This is expected to use an efficient mechanism that depends on both
// the data source and w, such as the sendfile system call.
// If no efficient mechanism is available for the pair of the data source and w,
// this will return 0, [errors.ErrUnsupported].
// For a partial send, this will return a positive value along with an error.
//
// When len(buf) > 0, the SendData method may use it as temporary space.
//
// The [Copy] function uses DataSender when available.
type DataSender interface {
    SendData(w Writer, n int64, buf []byte) (int64, error)
}
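To make the plumbing point concrete, here is how the three entry points could map onto the single method inside package io (a sketch; copyDispatch is an invented helper name):

// copyDispatch tries the fast path and reports whether it handled the copy.
// Copy would call it as copyDispatch(dst, src, -1, nil), CopyBuffer as
// copyDispatch(dst, src, -1, buf), and CopyN as copyDispatch(dst, src, n, nil).
func copyDispatch(dst Writer, src Reader, n int64, buf []byte) (int64, bool, error) {
    ds, ok := src.(DataSender)
    if !ok {
        return 0, false, nil
    }
    written, err := ds.SendData(dst, n, buf)
    if err == errors.ErrUnsupported {
        return 0, false, nil // caller falls back to its regular loop, reusing buf
    }
    return written, true, err
}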
Comment From: jfrech
Is requiring the exact errors.ErrUnsupported not too restrictive? I would have expected DataSender to be required to return 0, err where errors.Is(err, errors.ErrUnsupported) holds.
Comment From: neild
I don't see any reason to permit wrapped errors here; it makes it less efficient to check for the not-supported status without any real benefit. There isn't additional information to provide, and nobody is ever going to see the text of the error.
Comment From: panjf2000
If I'm reading this proposal correctly, all zero-copy system calls will be migrated to a data source's DataSender, and its ReaderFrom/WriterTo should only do generic copies without any magic system calls, right? If so, I do think this is a better approach where we'd be able to gather the zero-copy system calls that are currently scattered across multiple type methods and even source files.
Comment From: ianlancetaylor
I'm not entirely sure that we can change the behavior of the existing ReadFrom and WriteTo methods, but I hope that we can at least not complicate them further. All future zero-copy system calls would be done with SendData.
Comment From: hanwen-flow
There are cases when a type can efficiently support WriteTo or ReadFrom, but only some of the time
Wanted to add my $0.02 to this discussion: I am also interested in cases where WriteTo/ReadFrom work some of the time for the same sink/source pair. For example, in zip (https://go-review.googlesource.com/c/go/+/667275) and tar (https://github.com/golang/go/issues/70807), metadata would be written using normal Write calls, while file contents can benefit from magic copying.
It would also be great if 'magic' copying could be supported (seamlessly?) by io.Pipe(), i.e. if you do file system => tar.Writer => io.PipeWriter => io.PipeReader => file system, then the entire pipeline should use magic copying for file contents. (See https://github.com/golang/go/issues/54877)
The latter requirement suggests that a DataSender interface may not be the full answer, because a DataSender implementation cannot anticipate all the ways that a destination stream might be wrapped.
Comment From: hanwen-flow
How about something like
type ErrMagicNotNow struct {
    // If > 0, read precisely N bytes before attempting a magic
    // copy again. This can be used for restrictions on offset
    // alignment (eg. reflink copying). If =0, try a single read.
    N int
}

func (e *ErrMagicNotNow) Error() string {
    return "supports magic copy, but not now"
}

type MagicDestFder interface {
    // Returns a (file-descriptor, offset) pair for kernel-level
    // data copying. If offset is nil, use the current offset of
    // the fd. If we used in-kernel copying, the number of copied
    // bytes, and any error, is passed to the report function.
    // If this returns ErrMagicNotNow, fall back to normal copy.
    DestFd() (fd uintptr, offset *int64, report func(int64, error), err error)
}

type MagicSourceFder interface {
    // Returns a (file-descriptor, offset) pair for OS-level data
    // copying. If offset is nil, use the current offset of the
    // fd. The size is the maximum number of bytes that may be copied.
    // If ErrMagicNotNow is returned, fall back to normal copy for
    // one write. In this case, report is not called.
    SourceFd() (fd uintptr, off *int64, size int64, report func(int64, error), err error)
}

// Copy src to dst, using fallback if src returns ErrMagicNotNow. The
// fallback copy uses the given buffer.
func MagicCopy(dst io.MagicDestFder, src io.MagicSourceFder, buffer []byte) (int64, error)

// io.Copy ...
//
// if src and dst implement MagicSourceFder and MagicDestFder, a
// MagicCopy is attempted.
//
// if src is a MagicSourceFder and dst is a Pipe, src is passed onto
// the reading end of the pipe.
func Copy(dst io.Writer, src io.Reader) (int64, error)
This interface isn't very pretty, but I think it would get the job done. The report functions are there to enable wrappers that keep track of the number of bytes written, or the last I/O error.
At first I was not sure if destinations should support ErrMagicNotNow, but consider zip.Writer: that has an internal 4kb buffer. If we're in the middle of writing metadata, we should refuse zero-copy rather than flush the buffer.
Comment From: hanwen-flow
At first I was not sure if destinations should support ErrMagicNotNow, but consider zip.Writer: that has an internal 4kb buffer. If we're in the middle of writing metadata, we should refuse zero-copy rather than flush the buffer.
Also, in case of mandatory alignment for magic copy, we could check both source and destination. If they specify different Ns, that means that the copy can never work magically, and MagicCopy should fail.
Comment From: erincandescent
I'm in favour of keeping the buffer parameter. I can see various scenarios where it's essential:
- copy_file_range implementations often have alignment requirements, and so it might be necessary to do some portion of an optimizable copy through a buffer in order to hit those requirements
- If we're copying a sparse file between two filesystems, then obviously I want to use fallocate to punch holes in the destination file where the source file has holes, but I am maybe not going to otherwise be able to optimize things

IMO instead of passing n < 0 for "continue until end" it makes more sense to pass math.MaxInt64. This lets implementations unconditionally use min(x, n) when dealing with other limits. You can't meaningfully copy more anyway (you'll overflow the return value).
I think you need a ReadFrom equivalent. Without it, we'll just have the same problems with dropped optimizations just because a socket/file/etc was wrapped in a *bufio.Writer (obviously, *bufio.Reader will need to sprout at least knowledge of SendData so that we don't end up back in the wrapped reader's WriteTo and then back in io.Copy...)
Comment From: neild
Current version of the proposal:
The io.Copy and io.CopyBuffer functions will use WriteTo when the source supports it and ReadFrom when the destination supports it.
There are cases when a type can efficiently support WriteTo or ReadFrom, but only some of the time. For example, *os.File has both WriteTo and ReadFrom methods which will, under some circumstances, perform a more efficient copy using sendfile, splice, or similar OS APIs.
The presence of the WriteTo and ReadFrom methods on *os.File means that io.CopyBuffer operations will never use the provided buffer. In addition, an io.Copy involving an *os.File can invoke up to three recursive io.Copy calls before performing the actual copy. (See https://github.com/golang/go/issues/67074#issuecomment-2083482724 for details.)
The goal of this proposal is to:
- support fast-path io.Copy (using, for example, sendfile) when available
- permit the fast path to fall back to a regular copy without recursively invoking io.Copy
- efficiently plumb the buffer provided to io.CopyBuffer and the length provided to io.CopyN through to the fast-path copy function
We add the following interface to the io package:
// CopierTo may be implemented by a data source.
// The CopyTo method sends up to n bytes of data to w.
// If n is negative, it sends all available bytes.
// It returns the number of bytes sent.
//
// This is expected to use an efficient mechanism that depends on both
// the data source and w, such as the sendfile system call.
// If no efficient mechanism is available for the pair of the data source and w,
// this will return 0, [errors.ErrUnsupported].
// For a partial send, this will return a positive value along with an error.
//
// When len(buf) > 0, the CopyTo method may use it as temporary space.
//
// The [Copy], [CopyBuffer], and [CopyN] functions use CopierTo when available.
// User code should usually use these functions rather than calling CopyTo directly.
type CopierTo interface {
    CopyTo(w Writer, n int64, buf []byte) (int64, error)
}
We add CopyTo methods to *os.File and *net.TCPConn. In the documentation for these methods, we clearly state that users should use io.Copy instead of calling them directly.
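For illustration, this is roughly how the length plumbing could look inside package io if the proposal is adopted (a sketch; genericCopyN is an invented name for the existing fallback path):

func CopyN(dst Writer, src Reader, n int64) (int64, error) {
    if ct, ok := src.(CopierTo); ok {
        written, err := ct.CopyTo(dst, n, nil)
        if err != errors.ErrUnsupported {
            return written, err
        }
        // Unsupported for this pair: fall back without wrapping src in a
        // LimitedReader and without recursing through WriteTo/ReadFrom.
    }
    return genericCopyN(dst, src, n) // stands in for the existing CopyN path
}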
https://github.com/golang/go/issues/67074#issuecomment-2620478784 proposed that we call this interface and method DataSender/SendData. I think using the word "copy" makes it clearer that this method supports io.Copy, but possibly it reads too much like a method the user should call (when users should use io.Copy rather than calling CopyTo). I don't have strong feelings about the name.